Search Results

Search found 913 results on 37 pages for 'targets'.


  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several cutting-edge features that firmly establish GoldenGate's position as the leader in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, improves security, offers more options for high availability, and makes great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and don't miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "... performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment."

    There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

    Optimized for Oracle Database 12c - GoldenGate 12c is tailored to the unique capabilities of Oracle Database 12c, and out of the box it supports both multitenant (pluggable database (PDB)) and non-consolidated deployments. The naming convention used by Database 12c now has three parts (PDB name, schema name, and object name). We have changed the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single capture process runs at the container level rather than one per PDB. Running capture at the container level reduces both resource usage and the number of processes. To view a conceptual architecture diagram, click here.

    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed-data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple delivery processes (click here to view the previous method). Each delivery process executed independently, with no interaction with or knowledge of the other delivery processes. This setup was complicated and time consuming to configure, because the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery, we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery, click here.

    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, and DDL, are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance.

    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to use both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication across both unrestricted and restricted (limited ports or HTTP tunneling) networks between on-premises and cloud-based systems.

    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next release, GoldenGate support will be expanded for MS SQL Server, DB2, and Teradata.

    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with this great new feature.

    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

    Stay tuned for more blogs on the new features and for the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper, "Oracle GoldenGate 12c Release 1 New Features Overview".
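    Going back to the multitenant capture support for a moment, here is a minimal hypothetical sketch of what registering and configuring a single container-level Extract across two PDBs can look like; the credential alias, trail, and PDB/table names are invented for illustration:

        -- in GGSCI: one Extract registered at the container level, not per PDB
        DBLOGIN USERIDALIAS gg_root
        REGISTER EXTRACT exta DATABASE CONTAINER (pdb1, pdb2)

        -- Extract parameter file, using the new three-part naming convention
        EXTRACT exta
        USERIDALIAS gg_root
        EXTTRAIL ./dirdat/ea
        TABLE pdb1.hr.*;
        TABLE pdb2.sales.orders;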


  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx

    This one is going to be a little long because the keynote was jam-packed, so bear with me.

    The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates. His quote was that "rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE!

    The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming, then the missing-app argument goes away.

    While many Negative Nancies are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community.

    Steve Ballmer was followed by Julie Larson-Green, who, from my point of view, spent her time on stage selling us on Windows 8 all over again. That's something I would not have thought was needed until I listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool, I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI for bringing up "All Apps" now mirrors that of Windows Phone. That consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One. This seamless experience, I believe, is what is really needed for any future platform to be relevant.

    I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor for consumer devices, this is a welcome addition. He didn't limit his presentation to VS2013 features, though. He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale the UI depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing. Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS!

    How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said that if they (Microsoft) could do something good with an API, then 3rd-party developers could do something dynamite, and showed us some of the tools they had produced. These included natural user interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities, and the future looks to be full of opportunities.

    Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One. All I can say is that if my kids get their hands on this, they are going to be able to learn some of what dad does in a much more enjoyable way.

    At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right, we are in for a treat. See you there.

    del.icio.us Tags: BUILD 2013, Windows 8.1, Windows Phone, XAML, Keynote, Bing, Visual Studio 2013, Project Spark


  • Social Engagement: One Size Doesn't Fit Anyone

    - by Mike Stiles
    The key to achieving meaningful social engagement is to know who you're talking to, know what they like, and consistently deliver that kind of material to them. Every magazine for women knows this. When you read the article titles promoted on their covers, there's no mistaking for whom that magazine is intended. And yet, confusion still reigns at many brands as to exactly whom they want to talk to, what those people want to hear, and what kind of content they should be creating for them. In most instances, the root problem is that brands want to be all things to all people. Their target audience… the world! Good luck with that.

    It's 2012, the age of aggregation and custom content delivery. To cope with the modern-day barrage of information, people have constructed technological filters so that content they regard as being "for them" is mostly what gets through. Even if your brand is for men and women, young and old, you may want to consider social properties that divide men from women, and young from old. Yes, a man might find something in a women's magazine that interests him. But that doesn't mean he's going to subscribe to it, or buy even one issue. In fact he'll probably never see the article he'd otherwise be interested in, because in his mind, "This isn't for me." It wasn't packaged for him. News flash: men and women are different. So it's a tall order to craft your Facebook Page or Twitter handle to simultaneously exude the motivators for both.

    The Harris Interactive study "2012 Connecting and Communicating Online: State of Social Media" sheds light on the differing social behaviors and drivers:

    - 65% of women (vs. 59% of men) stay glued to social because they don't want to miss anything.
    - 25% of women check social when they wake up, before they check email. Only 18% of men check social before email.
    - 95% of women surveyed belong to Facebook vs. 86% of men.
    - 67% of women log in to Facebook once a day or more vs. 54% of men.
    - Conventional wisdom is that Pinterest is mostly a woman-thing, right? That may be true for viewing, but not for sharing. Men are actually more likely to share on Pinterest than women, 23% to 10%.
    - The sharing divide extends to YouTube. 68% of women use it mainly for consumption, as opposed to 52% of men.
    - Women are as likely to have a Twitter account as men, but they're much less likely to check it often. 54% of women check it once a week, compared to 2/3 of men.

    Obviously, there are some takeaways from this depending on your target. Women don't want to miss out on anything, so serialized content might be a good idea, right? Promotional posts that lead to a big payoff could keep them hooked. Posts for women might be better served first thing in the morning. If sharing is your goal, maybe male-targeted content is more likely to get those desired shares. And maybe Twitter is a better place to aim your male-targeted content than Facebook.

    Some grocery stores started experimenting with male-only aisles. The results have been impressive. Why? Because while it's true men were finding those same items in the store just fine before, now something has been created just for them. They have a place in the store where they belong.

    Each brand's strategy and targets are going to differ. The point is… know who you're talking to, know how they behave, know what they like, and deliver content that meets their expectations, using any number of social relationship management targeting tools. If, however, you're committed to a one-size-fits-all, "our content is for everybody" strategy (or even worse, a "this is what we want to put out and we expect everybody to love it" strategy), your content will miss the mark far more often than it hits.

    @mikestiles Photo via stock.xchng


  • MSBuild: convert relative path in imported project to absolute path.

    - by Ergwun
    Short version: I have an MSBuild project that imports another project. A property in the imported project holds a relative path that is relative to the location of the imported project. How do I convert this relative path to an absolute one? I've tried the ConvertToAbsolutePath task, but that makes it relative to the importing project's location.

    Long version: I'm trying out Robert Koritnik's MSBuild task for integrating NUnit output into Visual Studio (see this other SO question for a link). Since I like to have all my tools under version control, I want the targets file containing the custom task to point to the NUnit console application using a relative path. My problem is that this relative path ends up being resolved relative to the importing project. E.g. (in ... MyRepository\Third Party\NUnit\MSBuild.NUnit.Task.Source\bin\Release\MSBuild.NUnit.Task.Targets):

        ...
        <PropertyGroup Condition="'$(NUnitConsoleToolPath)' == ''">
          <NUnitConsoleToolPath>..\..\..\NUnit 2.5.5\bin\net-2.0</NUnitConsoleToolPath>
        </PropertyGroup>
        ...
        <Target Name="IntegratedTest">
          <NUnitIntegrated
            TreatFailedTestsAsErrors="$(NUnitTreatFailedTestsAsErrors)"
            AssemblyName="$(AssemblyName)"
            OutputPath="$(OutputPath)"
            ConsoleToolPath="$(NUnitConsoleToolPath)"
            ConsoleTool="$(NUnitConsoleTool)" />
        </Target>
        ...

    The above target fails with the error that the file (that is, nunit-console.exe) cannot be found. Inside the NUnitIntegrated MSBuild task, when the Execute() method is called, the current directory is the directory of the importing project, so relative paths point to the wrong location. I tried to convert the relative path to an absolute one by adding these tasks to the IntegratedTest target:

        <ConvertToAbsolutePath Paths="$(NUnitConsoleToolPath)">
          <Output TaskParameter="AbsolutePaths" PropertyName="AbsoluteNUnitConsoleToolPath"/>
        </ConvertToAbsolutePath>

    but this just converted it to be relative to the directory of the project file that imports this targets file. I know I can use the property $(MSBuildProjectDirectory) to get the directory of the importing project, but I can't find any equivalent for the directory of the imported targets file. Can anyone tell me how a path in an imported file that is supposed to be relative to that imported file's directory can be made absolute? Thanks!
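    For reference, a sketch of the fix that exists in newer toolsets: MSBuild 4.0 added the reserved property $(MSBuildThisFileDirectory), which expands to the directory of the file currently being evaluated - i.e. the imported .targets file - so the tool path can be anchored there. This is not available in MSBuild 3.5:

        <PropertyGroup Condition="'$(NUnitConsoleToolPath)' == ''">
          <!-- resolves relative to the imported .targets file, not the importing project -->
          <NUnitConsoleToolPath>$(MSBuildThisFileDirectory)..\..\..\NUnit 2.5.5\bin\net-2.0</NUnitConsoleToolPath>
        </PropertyGroup>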


  • DirectoryServicesCOMException when working with System.DirectoryServices.AccountManagement

    - by antik
    I'm attempting to determine whether a user is a member of a given group using System.DirectoryServices.AccountManagement. I'm doing this inside a SharePoint WebPart in SharePoint 2007 on a 64-bit system.

    - The project targets .NET 3.5.
    - Impersonation is enabled in the web.config.
    - The IIS site in question uses an IIS app pool with a domain user configured as the identity.

    I am able to instantiate a PrincipalContext as such:

        PrincipalContext pc = new PrincipalContext(ContextType.Domain)

    Next, I try to grab a principal:

        using (PrincipalContext pc = new PrincipalContext(ContextType.Domain))
        {
            GroupPrincipal group = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\\somegroup");
            // snip: exception thrown by line above.
        }

    Both the above and UserPrincipal.FindByIdentity with a user SAM throw a DirectoryServicesCOMException: "Logon failure: Unknown user name or bad password". I've tried passing a complete SAMAccountName to FindByIdentity (in the form of MYDOMAIN\username) and just the username, with no change in behavior. I've tried executing the code under other credentials using both the HostingEnvironment.Impersonate and SPSecurity.RunWithElevatedPrivileges approaches, with the same result.

    I've also tried instantiating my context with the domain name in place:

        PrincipalContext pc = new PrincipalContext(ContextType.Domain, "MYDOMAIN");

    This throws a PrincipalServerDownException: "The server could not be contacted."

    I'm working on a reasonably hardened server. I did not lock the system down, so I am unsure exactly what has been done to it. If there are permissions I need to grant to my pool identity's user, or settings needed in the domain security policy for these calls to work, I can configure the domain accordingly. Are there any settings that would be preventing my code from running? Am I missing something in the code itself? Is this just not possible in a SharePoint web?

    EDIT: On further testing, my code functions correctly in a console application targeting .NET 4.0. (I targeted a different framework version because, for some reason, AccountManagement wasn't available to me in the console app when targeting .NET 3.5.)

        using (PrincipalContext pc = new PrincipalContext(ContextType.Domain))
        using (UserPrincipal adUser = UserPrincipal.FindByIdentity(pc, "MYDOMAIN\\joe.user"))
        using (GroupPrincipal adGroup = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\\user group"))
        {
            if (adUser.IsMemberOf(adGroup))
            {
                Console.WriteLine("User is a member!");
            }
            else
            {
                Console.WriteLine("User is NOT a member.");
            }
        }

    What varies in my SharePoint environment that might prohibit this from executing?
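    One hedged diagnostic sketch, for whoever hits the same wall: PrincipalContext has an overload that takes explicit credentials, which helps separate "the identity the code runs as is wrong" from "the domain controller can't be reached at all". The account name and password below are placeholders:

        using (PrincipalContext pc = new PrincipalContext(
            ContextType.Domain, "MYDOMAIN", "MYDOMAIN\\svc_account", "placeholder-password"))
        {
            // if this succeeds where the parameterless form fails, the problem is
            // the identity the code executes as, not domain connectivity
            GroupPrincipal group = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\\somegroup");
        }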


  • MSBuild Community Tasks can't see msbuild in cmd

    - by phenevo
    Hi, I have a WinForms project. app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
              <section name="MyClient.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
            </sectionGroup>
          </configSections>
          <applicationSettings>
            <MyClient.Properties.Settings>
              <setting name="MyClient_MyService_MyService" serializeAs="String">
                <value>SomeUniqueKeyWithAGoodName/server/myService.asmx</value>
              </setting>
            </MyClient.Properties.Settings>
          </applicationSettings>
        </configuration>

    customized.targets:

        <Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <BuildEnvironment>DEV</BuildEnvironment>
          </PropertyGroup>
          <Choose>
            <When Condition=" '$(BuildEnvironment)' == 'DEV' ">
              <PropertyGroup>
                <BaseUrlWebServices>http://tools.productionServer.pl</BaseUrlWebServices>
                <PublishDir>C:\Documents and Settings\myName\Desktop\Project\TestMsBuild\</PublishDir>
              </PropertyGroup>
            </When>
            <When Condition=" '$(BuildEnvironment)' == 'QA' ">
              <PropertyGroup>
                <BaseUrlWebServices>http://tools.testServer.pl</BaseUrlWebServices>
                <PublishDir>C:\Documents and Settings\myName\Desktop\Project\TestMsBuild2\</PublishDir>
              </PropertyGroup>
            </When>
          </Choose>
        </Project>

    and publishQA.bat (this file is in the project's directory):

        @ECHO OFF
        msbuild /t:Publish /p:Configuration=Release /p:BuildEnvironment=QA /p:ApplicationVersion=1.2.3.5
        pause

    When I run this .bat I get an error in cmd: "@@echo is not recognised...". When I start the project it's OK, but when I try to use any method from the web service I get an error about a wrong URI. The correct URI for QA is: http://tools.testServer.pl/server/myService.asmx. Any ideas?
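    Two hedged guesses, offered as sketches rather than confirmed fixes: the "@@echo is not recognised" error is the classic symptom of a .bat saved as UTF-8 with a byte-order mark (re-saving it as ANSI makes that go away), and, separately, a plain cmd.exe session does not have msbuild on its PATH - only a Visual Studio command prompt sets that up - so the build may never start. A sketch covering the second point, calling MSBuild by its full framework path; the explicit customized.targets argument is my assumption about which file is meant to be built:

        @ECHO OFF
        set MSBUILD=%WINDIR%\Microsoft.NET\Framework\v3.5\MSBuild.exe
        "%MSBUILD%" customized.targets /t:Publish /p:Configuration=Release /p:BuildEnvironment=QA /p:ApplicationVersion=1.2.3.5
        pause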


  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception?

    Consider a WCF service that is supposed to use PowerShell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for PowerShell and via PowerShell for MSBuild), rather than 'shelling out' - this was a specific design decision to avoid command-line hell, as well as to enable passing actual objects into the PowerShell script. The PowerShell script that calls MSBuild is shown below:

        function Run-MSBuild
        {
            [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine")

            $engine = New-Object Microsoft.Build.BuildEngine.Engine
            $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5"

            $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5")
            $project.Load("deploy.targets")
            $project.InitialTargets = "DoStuff"

            #
            # Set some initial Properties & Items
            #

            # Optionally set up some loggers (have also tried it without any loggers)
            $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger
            $engine.RegisterLogger($consoleLogger)
            $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger
            $fileLogger.Parameters = "verbosity=diagnostic"
            $engine.RegisterLogger($fileLogger)

            # Run the build - this is the line that throws a CmdletInvocationException
            $result = $project.Build()

            $engine.Shutdown()
        }

    When running the above script from a PS command prompt, it all works fine. However, as soon as the script is executed from C#, it fails with the above exception. The C# code being used to call PowerShell is shown below (remoting functionality removed for simplicity's sake):

        // Build the DTO object that will be passed to PowerShell
        var dto = SetupDTO();

        RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
        using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
        {
            runspace.Open();

            IList errors;
            using (var scriptInvoker = new RunspaceInvoke(runspace))
            {
                // The PowerShell script lives in a file that gets compiled as an embedded resource
                TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource"));
                string script = tr.ReadToEnd();

                // Load the script into the Runspace
                scriptInvoker.Invoke(script);

                // Call the function defined in the script, passing the DTO as an input object
                var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
            }
        }

    Assuming that the issue was related to MSBuild outputting something that the PowerShell runspace can't cope with, I have also tried the following variations of the second .Invoke() call:

        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors);
        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors);
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null");

    I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls being made to it.

    Do the great and the good of Stack Overflow have any insight that might save my sanity?
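    Not a full answer, but a hedged sketch of how the underlying cause can usually be dug out: CmdletInvocationException is a wrapper, and the exception the pipeline actually threw normally sits in its InnerException, while non-terminating errors land in the 'errors' collection that Invoke already returns (names below match the code above):

        IList errors = null;
        try
        {
            var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
        }
        catch (System.Management.Automation.CmdletInvocationException ex)
        {
            // the real cause is usually the wrapped exception, not the outer one
            Exception cause = ex.InnerException ?? ex;
            Console.WriteLine(cause);
        }

        // non-terminating errors from the pipeline end up here
        if (errors != null)
        {
            foreach (var err in errors)
            {
                Console.WriteLine(err);
            }
        }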


  • trying to build Boost MPI, but the lib files are not created. What's going on?

    - by unknownthreat
    I am trying to run a program with Boost MPI, but the thing is I don't have the .lib, so I am trying to create one by following the instructions at http://www.boost.org/doc/libs/1_43_0/doc/html/mpi/getting_started.html#mpi.config

    The instructions say "For many users using LAM/MPI, MPICH, or OpenMPI, configuration is almost automatic". I got myself OpenMPI in C:\, but I didn't do anything more with it. Do we need to do anything with it? Besides that, there is another statement in the instructions: "If you don't already have a file user-config.jam in your home directory, copy tools/build/v2/user-config.jam there." Well, I simply did what it says: I put "user-config.jam" in C:\boost_1_43_0 and added "using mpi ;" to the file. Next, this is what I've done:

        C:\boost_1_43_0>bjam --with-mpi

        WARNING: No python installation configured and autoconfiguration failed.
        See http://www.boost.org/libs/python/doc/building.html for configuration instructions
        or pass --without-python to suppress this message and silently skip all Boost.Python targets

        Building the Boost C++ Libraries.

        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.
        warning: Unable to construct ./stage-unversioned
        warning: Unable to construct ./stage-unversioned

        Component configuration:
        - date_time : not building
        - filesystem : not building
        - graph : not building
        - graph_parallel : not building
        - iostreams : not building
        - math : not building
        - mpi : building
        - program_options : not building
        - python : not building
        - random : not building
        - regex : not building
        - serialization : not building
        - signals : not building
        - system : not building
        - test : not building
        - thread : not building
        - wave : not building

        ...found 1 target...

        The Boost C++ Libraries were successfully built!
        The following directory should be added to compiler include paths:
        C:\boost_1_43_0
        The following directory should be added to linker library paths:
        C:\boost_1_43_0\stage\lib

    I see that there are many libs in C:\boost_1_43_0\stage\lib, but I see no trace of libboost_mpi-vc100-mt-1_43.lib or libboost_mpi-vc100-mt-gd-1_43.lib, which are the libraries required for linking MPI applications. What could have gone wrong, so that these libraries are not being built?
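    A hedged reading of that output: the line 'note: to enable MPI support, add "using mpi ;" to user-config.jam' suggests bjam never found the user-config.jam placed in C:\boost_1_43_0 - the instructions quoted above want it in the home directory, which is where bjam looks by default. A sketch of what that file might contain (the explicit compiler-wrapper path is a placeholder for wherever OpenMPI actually lives):

        # %USERPROFILE%\user-config.jam
        using mpi ;

        # or, if auto-detection fails, point it at the MPI compiler wrapper explicitly:
        # using mpi : "C:/OpenMPI/bin/mpicxx" ;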


  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising):

        <script>
            var largeArray = [/* lots of stuff in here */];
        </script>

    In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.)

    Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console:

        Error: script stack space quota is exhausted

    In Chrome, I simply get the sad-tab page.

    Cut to the chase, already: Is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong...

    Edit 1

    @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now.

    @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards.

    @Phrogz: each element looks something like this:

        {dateTime:new Date(1296176400000), terminalId:'terminal999',
         'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680,
         'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false', 'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0}

    @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, and over half a terabyte of disk space... and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all!

    * other than the obvious: sending less data to the browser
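    One mitigation worth sketching (not a guaranteed fix): serve the array as JSON and parse it with JSON.parse, which typically stresses the JS engine far less than evaluating a 44 MB script literal. Note the new Date(...) values would have to travel as plain timestamps and be revived after parsing; the endpoint URL below is hypothetical:

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/data/largeArray.json', true); // hypothetical endpoint
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                // JSON.parse is native in both Chrome 8 and Firefox 3.6
                var largeArray = JSON.parse(xhr.responseText);
                largeArray.forEach(function (row) {
                    row.dateTime = new Date(row.dateTime); // revive timestamps
                });
            }
        };
        xhr.send();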


  • How to get NHProf reports in TeamCity running MSBUILD

    - by Jon Erickson
    I'm trying to get NHProf reports on my integration tests published as a report in TeamCity. I'm not sure how to get this set up correctly, and my first attempts have been unsuccessful. Let me know if there is any more information that would be helpful...

    I'm getting the following error when trying to generate HTML reports with MSBuild (which is being run by TeamCity):

        error MSB3073: The command "C:\CI\Tools\NHProf\NHProf.exe /CmdLineMode /File:"E:\CI\BuildServer\RMS-Winform\Group\dev\NHProfOutput.html" /ReportFormat:Html" exited with code -532459699

    I tell TeamCity to run MSBuild with the CIBuildWithNHProf target. The command-line parameters that I pass from TeamCity are:

        /property:NHProfExecutable=%system.NHProfExecutable%;NHProfFile=%system.teamcity.build.checkoutDir%\NHProfOutput.html;NHProfReportFormat=Html

    The portion of my MSBuild script that runs my tests is as follows:

        <UsingTask TaskName="NUnitTeamCity" AssemblyFile="$(teamcity_dotnet_nunitlauncher_msbuild_task)"/>

        <!-- Set Properties -->
        <PropertyGroup>
          <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
          <Platform Condition=" '$(Platform)' == '' ">x86</Platform>
          <NHProfExecutable></NHProfExecutable>
          <NHProfFile></NHProfFile>
          <NHProfReportFormat></NHProfReportFormat>
        </PropertyGroup>

        <!-- Test Database -->
        <Target Name="DeployDatabase">
          <!-- ... -->
        </Target>

        <!-- Database Used For Integration Tests -->
        <Target Name="DeployTestDatabase">
          <!-- ... -->
        </Target>

        <!-- Build All Projects -->
        <Target Name="BuildProjects">
          <MSBuild Projects="..\MySolutionFile.sln" Targets="Build"/>
        </Target>

        <!-- Stop NHProf -->
        <Target Name="NHProfStop">
          <Exec Command="$(NHProfExecutable) /Shutdown" />
        </Target>

        <!-- Run Unit/Integration Tests -->
        <Target Name="RunTests">
          <CreateItem Include="..\**\bin\debug\*Tests.dll">
            <Output TaskParameter="Include" ItemName="TestAssemblies" />
          </CreateItem>
          <NUnitTeamCity Assemblies="@(TestAssemblies)" NUnitVersion="NUnit-2.5.3"/>
        </Target>

        <!-- Start NHProf -->
        <Target Name="NHProfStart">
          <Exec Command="$(NHProfExecutable) /CmdLineMode /File:&quot;$(NHProfFile)&quot; /ReportFormat:$(NHProfReportFormat)" />
        </Target>

        <Target Name="CIBuildWithNHProf"
                DependsOnTargets="BuildProjects;DeployTestDatabase;NHProfStart;RunTests;NHProfStop;DeployDatabase">
        </Target>
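    One observation worth recording: exit code -532459699 is 0xE0434F4D, the Windows status code for an unhandled .NET exception, so NHProf.exe itself is crashing rather than MSBuild mis-invoking it. While chasing that down, a hedged sketch that keeps the build alive and surfaces the exit code, using Exec's IgnoreExitCode parameter and ExitCode output:

        <!-- a sketch: let the build continue and log NHProf's exit code -->
        <Target Name="NHProfStart">
          <Exec Command="$(NHProfExecutable) /CmdLineMode /File:&quot;$(NHProfFile)&quot; /ReportFormat:$(NHProfReportFormat)"
                IgnoreExitCode="true">
            <Output TaskParameter="ExitCode" PropertyName="NHProfExitCode" />
          </Exec>
          <Warning Text="NHProf exited with code $(NHProfExitCode)" Condition="'$(NHProfExitCode)' != '0'" />
        </Target>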


  • What's the use of Ant's extension-point if/unless attributes?

    - by Robert Menteer
    When you define an extension-point in an Ant build file, you can make it conditional by using the if or unless attribute. On a target, if/unless prevents its tasks from being run. But an extension-point doesn't have any tasks to conditionally run, so what does the condition do? My thought (which proved to be incorrect in Ant 1.8.0) was that it would prevent any targets that extend the extension-point from being run. Here is an example build script showing the problem:

        <project name="ext-test" default="main">

            <property name="do.it" value="false" />

            <extension-point name="init"/>
            <extension-point name="doit" depends="init" if="${do.it}" />

            <target name="extend-init" extensionOf="init">
                <echo message="Doing extend-init." />
            </target>

            <target name="extend-doit" extensionOf="doit">
                <echo message="Do It! (${do.it})" />
            </target>

            <target name="main" depends="doit">
                <echo message="Doing main." />
            </target>

        </project>

    Using the command:

        ant -v

    results in:

        Apache Ant version 1.8.0 compiled on February 1 2010
        Trying the default build file: build.xml
        Buildfile: /Users/bob/build.xml
        Detected Java version: 1.6 in: /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
        Detected OS: Mac OS X
        parsing buildfile /Users/bob/build.xml with URI = file:/Users/bob/build.xml
        Project base dir set to: /Users/bob
        parsing buildfile jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file
        Build sequence for target(s) `main' is [extend-init, init, extend-doit, doit, main]
        Complete build sequence is [extend-init, init, extend-doit, doit, main, ]

        extend-init:
             [echo] Doing extend-init.

        init:

        extend-doit:
             [echo] Do It! (false)

        doit:
        Skipped because property 'false' not set.

        main:
             [echo] Doing main.

        BUILD SUCCESSFUL
        Total time: 0 seconds

    You will notice the target extend-doit is executed, but the extension-point itself is skipped. Since an extension-point doesn't have any tasks, what exactly has been skipped? Any targets that depend on the extension-point still get executed, since a skipped target is a successful target. So what is the value of the if/unless attributes on an extension-point?
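    For comparison, a hedged workaround sketch (it sidesteps the question rather than answering it): putting the condition on the extending target itself does skip that target's tasks, because if/unless on an ordinary target is evaluated when the target runs:

        <target name="extend-doit" extensionOf="doit" if="${do.it}">
            <echo message="Do It! (${do.it})" />
        </target>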


  • Using Effect For Fog of War

    - by Qua
    I'm trying to apply fog of war to the areas on the screen not currently visible to the player. I do this by rendering the game content into one RenderTarget and the fog of war into another, and then merging them with an effect file that takes the color from the game RenderTarget and the alpha from the fog of war RenderTarget. The FOW RenderTarget is black where the FOW appears, and white where it doesn't.

    This does work, but it colors the fog of war (the unrevealed locations) white instead of the intended black. Before applying the effect, I clear the backbuffer of the device to white. When I try clearing it to black, none of the fog of war appears at all, which I assume is a product of alpha blending with black. It works for all other colors, however, giving the resulting screen a tint of that color. How do I achieve a black fog while still being able to do alpha blending between the two render targets?

    The rendering code for applying the FOW:

        private RenderTarget2D mainTarget;
        private RenderTarget2D lightTarget;

        private void CombineRenderTargetsAndDraw()
        {
            batch.GraphicsDevice.SetRenderTarget(null);
            batch.GraphicsDevice.Clear(Color.White);

            fogOfWar.Parameters["LightsTexture"].SetValue(lightTarget);

            batch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
            fogOfWar.CurrentTechnique.Passes[0].Apply();
            batch.Draw(
                mainTarget,
                new Rectangle(0, 0,
                    batch.GraphicsDevice.PresentationParameters.BackBufferWidth,
                    batch.GraphicsDevice.PresentationParameters.BackBufferHeight),
                Color.White
            );
            batch.End();
        }

    The effect file I'm using to apply the FOW:

        texture LightsTexture;

        sampler ColorSampler : register(s0);
        sampler LightsSampler = sampler_state{
            Texture = <LightsTexture>;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 TexCoord : TEXCOORD0;
        };

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 tex = input.TexCoord;
            float4 color = tex2D(ColorSampler, tex);
            float4 alpha = tex2D(LightsSampler, tex);
            return float4(color.r, color.g, color.b, alpha.r);
        }

        technique Technique1
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }
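    A hedged guess at the black-fog case: XNA 4.0's BlendState.AlphaBlend expects premultiplied alpha, so a texel written as (r, g, b, 0) still pushes its full color through the blend, which is why only a white clear "looks right". A sketch of premultiplying inside the pixel shader, so fogged texels resolve to transparent black (same structures as the effect above):

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 tex = input.TexCoord;
            float4 color = tex2D(ColorSampler, tex);
            float4 alpha = tex2D(LightsSampler, tex);
            // premultiply so a fogged texel becomes (0, 0, 0, 0) instead of
            // bleeding its color through the blend with zero alpha
            return float4(color.rgb * alpha.r, alpha.r);
        }

    With that change, clearing the device to black should then give a black fog, since fogged texels no longer add anything to the framebuffer.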


  • MSBuild: Items + Batching + CreateItem + Transforms Question

    - by KeithCS
    I have a bit of an MSBuild project that makes me wonder why the outcome is the way it is. Not that it is causing an issue or anything of the sort, but I would like to better my understanding of it.

        <?xml version="1.0" encoding="utf-8" ?>
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="TestTarget1;TestTarget2" ToolsVersion="3.5">

          <ItemGroup>
            <PathDir Include="C:\RootDir\UniqueDir1"/>
            <PathDir Include="C:\RootDir\UniqueDir2" />
          </ItemGroup>

          <Target Name="TestTarget1" Outputs="%(PathDir.Identity)">
            <PropertyGroup>
              <RootPath>%(PathDir.Identity)</RootPath>
            </PropertyGroup>
            <ItemGroup>
              <SubDirectory Include="Common1"/>
              <SubDirectory Include="Common2"/>
            </ItemGroup>
            <CreateItem Include="@(SubDirectory->'$(RootPath)\%(Identity)')">
              <Output TaskParameter="Include" ItemName="FullPath"/>
            </CreateItem>
            <Message Text="@(FullPath)"/>
          </Target>

          <Target Name="TestTarget2">
            <Message Text="@(FullPath)"/>
          </Target>

        </Project>

    So I have two main paths that are unique, and within each I have two directories with the same names. In TestTarget1, I am batching against the identity of the items in PathDir, and then performing a transform on the SubDirectory item, which contains the common folder names found in both unique directories, to create a new item containing the full paths.

    Anyway, after that, the output for the targets is as follows.

    Target 1:

        C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2
        C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2

    Target 2:

        C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2;C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2

    So my question, I guess, is: why does TestTarget1 only display the directories belonging to the directory it is batching against? I know it probably has to do with batching, but that's all I know.


  • How to do a fetch request with expressions like this on the iPhone?

    - by dontWatchMyProfile
    The documentation has an example of how to retrieve simple values only, rather than managed objects. This is a lot like SQL, using aliases and functions to retrieve only calculated values. So, actually, pretty geeky stuff. To get the minimum date from a bunch of records, this is used on the Mac:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event"
                                                  inManagedObjectContext:context];
        [request setEntity:entity];

        // Specify that the request should return dictionaries.
        [request setResultType:NSDictionaryResultType];

        // Create an expression for the key path.
        NSExpression *keyPathExpression = [NSExpression expressionForKeyPath:@"creationDate"];

        // Create an expression to represent the minimum value at the key path 'creationDate'
        NSExpression *minExpression = [NSExpression expressionForFunction:@"min:"
                                                                arguments:[NSArray arrayWithObject:keyPathExpression]];

        // Create an expression description using the minExpression and returning a date.
        NSExpressionDescription *expressionDescription = [[NSExpressionDescription alloc] init];

        // The name is the key that will be used in the dictionary for the return value.
        [expressionDescription setName:@"minDate"];
        [expressionDescription setExpression:minExpression];
        [expressionDescription setExpressionResultType:NSDateAttributeType];

        // Set the request's properties to fetch just the property represented by the expressions.
        [request setPropertiesToFetch:[NSArray arrayWithObject:expressionDescription]];

        // Execute the fetch.
        NSError *error;
        NSArray *objects = [context executeFetchRequest:request error:&error];
        if (objects == nil) {
            // Handle the error.
        } else {
            if ([objects count] > 0) {
                NSLog(@"Minimum date: %@", [[objects objectAtIndex:0] valueForKey:@"minDate"]);
            }
        }

        [expressionDescription release];
        [request release];

    Nice, I thought - but after taking a deep look into NSExpression +expressionForFunction:arguments:, it turns out that iPhone OS does NOT support the min: function. Well, is there perhaps a nifty way to use a function of one's own for this kind of stuff on the iPhone? Because one thing I'm already worrying about is how I'm going to sort a table based on the calculated distance of targets on a map (location-based stuff).
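    A workaround sketch that stays on supported iPhone OS API, assuming the same entity and context as above: sort ascending on creationDate and fetch a single object, then read its date. It returns a managed object rather than a dictionary, but it gets the minimum without min::

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Event"
                                       inManagedObjectContext:context]];

        NSSortDescriptor *byDate = [[NSSortDescriptor alloc] initWithKey:@"creationDate"
                                                               ascending:YES];
        [request setSortDescriptors:[NSArray arrayWithObject:byDate]];
        [request setFetchLimit:1];  // only the earliest row comes back

        NSError *error = nil;
        NSArray *objects = [context executeFetchRequest:request error:&error];
        if ([objects count] > 0) {
            NSLog(@"Minimum date: %@", [[objects objectAtIndex:0] valueForKey:@"creationDate"]);
        }

        [byDate release];
        [request release];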


  • Using target-specific variable in makefile

    - by James Johnston
    I have the following makefile:

        OUTPUTDIR = build

        all: v12target v13target

        v12target: INTDIR = v12
        v12target: DoV12.avrcommontargets

        v13target: INTDIR = v13
        v13target: DoV13.avrcommontargets

        %.avrcommontargets: $(OUTPUTDIR)/%.elf
        	@true

        $(OUTPUTDIR)/%.elf: $(OUTPUTDIR)/$(INTDIR)/main.o
        	@echo TODO build ELF file from object file: destination $@, source $^
        	@echo Compiled elf file for $(INTDIR) > $@

        $(OUTPUTDIR)/$(INTDIR)/%.o: %.c
        	@echo TODO call GCC to compile C file: destination $@, source $<
        	@echo Compiled object file for $<, revision $(INTDIR) > $@

        $(shell rm -rf $(OUTPUTDIR))
        $(shell mkdir -p $(OUTPUTDIR)/v12 2> /dev/null)
        $(shell mkdir -p $(OUTPUTDIR)/v13 2> /dev/null)

        .SECONDARY:

    The idea is that there are several different code configurations that need to be compiled from the same source code. The "all" target depends on v12target and v13target, which set a number of variables for that particular build. It also depends on an "avrcommontargets" pattern, which defines how to actually do the compiling. avrcommontargets then depends on the ELF file, which in turn depends on object files, which are built from the C source code.

    Each compiled C file results in an object file (*.o). Since each configuration (v12, v13, etc.) results in a different output, the C file needs to be built several times with the output placed in different subdirectories. For example, "build/v12/main.o", "build/v13/main.o", etc.

    Sample output:

        TODO call GCC to compile C file: destination build//main.o, source main.c
        TODO build ELF file from object file: destination build/DoV12.elf, source build//main.o
        TODO build ELF file from object file: destination build/DoV13.elf, source build//main.o

    The problem is that the object file isn't going into the correct subdirectory: for example, "build//main.o" instead of "build/v12/main.o". That then prevents main.o from being correctly rebuilt to generate the v13 version of main.o. I'm guessing the issue is that $(INTDIR) is a target-specific variable, and perhaps such variables can't be used in the pattern rules I defined for %.elf and %.o.

    The correct output would be:

        TODO call GCC to compile C file: destination build/v12/main.o, source main.c
        TODO build ELF file from object file: destination build/DoV12.elf, source build/v12/main.o
        TODO call GCC to compile C file: destination build/v13/main.o, source main.c
        TODO build ELF file from object file: destination build/DoV13.elf, source build/v13/main.o

    What do I need to do to adjust this makefile so that it generates the correct output?
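    That guess is on the right track: in GNU make, target-specific variables only take effect in recipes, while prerequisite lists are expanded when the makefile is read, so $(INTDIR) is empty exactly where it is needed. A hedged sketch of one way around it, encoding the variant in the pattern stem instead ($* is the matched stem; the $(INTDIR) prerequisite then comes off the %.elf pattern rule):

        # one pattern rule covers both variants; the variant name lives in the
        # pattern stem and is recovered with $* instead of a target-specific variable
        $(OUTPUTDIR)/%/main.o: main.c
        	@echo TODO call GCC to compile C file: destination $@, source $<
        	@echo Compiled object file for $<, revision $* > $@

        # the ELF targets then name their object files explicitly, per variant
        $(OUTPUTDIR)/DoV12.elf: $(OUTPUTDIR)/v12/main.o
        $(OUTPUTDIR)/DoV13.elf: $(OUTPUTDIR)/v13/main.o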


  • Running the same PowerShell script multiple times asynchronously with separate runspaces

    - by teqnomad
    Hi, I have a PowerShell script which is called by a batch script, which is in turn called by a trap receiver (which also passes environment variables), running on Windows 2008. The traps are flushed out at times in sets of 2-4 trap events. The batch script echoes the trap details for each message to a logfile, but the PowerShell script on the next line of the batch script only appears to process the first trap message. My interpretation is that the DefaultRunspace is common to all iterations of the script, and this is why the others appear to be ignored.

    I've tried adding "-sta" when I invoke the PowerShell script using "powershell.exe -command", but this didn't help. I've researched and found a method using C#, but I don't know that language, and I'm busy enough learning PowerShell, so I'm hoping for a more direct solution - especially as interleaving a "wrapper" between batch and PowerShell would involve passing the environment variables along:

        http://www.codeproject.com/KB/threads/AsyncPowerShell.aspx

    I've hunted through Stack Overflow, and again the only question of a similar vein was using C#. Any suggestions welcome.

    Some script background: the PowerShell script is actually a modification of a great script found at the gregorystrike website - can't post the link as I'm limited to one link, but it's the one for Lefthand arrays. Lots of mods so it can do multiple targets from one .ini file, take in the environment variables, and optionally run portions of the script interactively with WinForms. But you can see the gist of the original script.

    The batch script is pretty basic. The key things are that I'm trying to filter out trap noise using the :~ substring operator, and that I tried the -sta option to see if it would compartmentalise the PowerShell script:

        set debug=off
        set CMD_LINE_ARGS="%*"
        set LHIPAddress="%2"
        set VARBIND8="%8"
        shift
        shift
        shift
        shift
        shift
        shift
        shift
        set CHASSIS="%9"

        echo %DATE% %TIME% "Trap Received: %LHIPAddress% %CHASSIS% %VARBIND8%" >> C:\Logs\trap_out.txt

        set ACTION="%VARBIND8:~39,18%"
        echo %DATE% %TIME% "Action substring is %ACTION%" 2>&1 >> C:\Logs\trap_out.txt

        if %ACTION%=="Remote Copy Volume" (
            echo Prepostlefthand_env_v2.9 >> C:\Logs\trap_out.txt
            c:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe -sta -executionpolicy unrestricted -command " & 'C:\Scripts\prepostlefthand_env_v2.9.ps1' Backupsettings.ini ALL" 2>&1 >> C:\Logs\trap_out.txt
        ) ELSE (
            echo %DATE% %TIME% "Action substring is %ACTION% so exiting" 2>&1 >> C:\Logs\trap.out.txt
        )
        exit
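    For what it's worth, a hedged sketch of a variation worth testing: each powershell.exe process gets its own runspace, so if later traps are being lost, the culprit may be the -command quoting or contention on the shared logfile rather than a shared DefaultRunspace. Invoking with -File removes one layer of quoting, and a per-invocation log name (the %RANDOM% suffix is only an illustration) makes it visible whether every trap actually launched the script:

        if %ACTION%=="Remote Copy Volume" (
            echo Prepostlefthand_env_v2.9 >> C:\Logs\trap_out.txt
            c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -sta -ExecutionPolicy Unrestricted -File "C:\Scripts\prepostlefthand_env_v2.9.ps1" Backupsettings.ini ALL >> "C:\Logs\trap_%RANDOM%.txt" 2>&1
        )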


  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this:

    1. Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district.
    2. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).
    3. The analyst analyzes the data created in (2), gets close to her goal, but sees that she needs more data and so goes back to (1).
    4. Rinse and repeat until the tables and graphics meet QA/QC and satisfy the client.
    5. Write the report incorporating the tables and graphics.
    6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data with a new download (e.g. get the building permits from the last year) and pressing a "RECALCULATE" button, unless specifications change.

    At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ArcGIS, R, and Unix tools. Thanks!

    PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (the ".RData" suffix) and scripts (the ".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files/targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for loading into an SQL database, and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback!

    http://www.gnu.org/software/make/manual/html%5Fnode/index.html#Top

        R=/home/wsprague/R-2.9.2/bin/R

        persondata.RData: ImportData.R ../../DATA/ss07por.csv Functions.R
        	$R --slave -f ImportData.R

        persondata.Munged.RData: MungeData.R persondata.RData Functions.R
        	$R --slave -f MungeData.R

        report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R
        	$R --slave -f TabulateAndGraph.R > report.txt


  • Custom activity designers in Workflow Foundation 3.5: How do they work?

    - by stakx
    Intent of this post: I realise that Workflow Foundation is not extremely popular on Stack Overflow and that there will probably be not many answers, or none at all. This post is intended as a resource for people trying to customise workflow activities' appearance through custom designer classes.

    Goals: I am attempting to create a custom designer class for workflow activities to achieve the following:

    1. Make activities look less technical. For example, I don't necessarily want to see the internal object name as the activity's "title" - instead, I'd like to see something more descriptive.
    2. Display the values of certain properties beneath the title text. I would like to see some properties' values directly underneath the title so that I don't need to look somewhere else (namely, at the Properties window).
    3. Provide custom drop areas and draw custom internal arrows. As an example, I would like to be able to have custom drop areas in very specific places.

    What I found out so far: I created a custom designer class deriving from SequentialActivityDesigner as follows:

        [Designer(typeof(PlainDesigner))]
        public partial class SomeActivity : CompositeActivity
        {
            ...
        }

        class PlainDesigner : SequentialActivityDesigner
        {
            ...
        }

    Through overriding some properties and the OnPaint method, I found out about the following correspondences between the properties and how the activity is displayed:

    Figure 1. Relationship between some properties of a SequentialActivityDesigner and the displayed activity.

    Possible solutions for goal #1 (make activities look less technical) and goal #2 (display values of properties beneath title text): The displayed title can be changed through the Title property. If more room is required to display additional information beneath the title, the TitleHeight property can be increased (i.e., override the property and make it return base.TitleHeight + n, where n is some positive integer). Then override the OnPaint method and draw additional text in the area reserved through TitleHeight.

    Open questions: What are the connectors, connections, and connection points used for? They seem to be necessary, but for what purpose? While the drop targets can be obtained through the GetDropTargets method, it seems that this is not necessarily where the designer will actually place dropped activities. When an activity is dragged across a workflow, the designer displays little green plus signs where activities can be dropped; how does it figure out the locations of these plus signs? How does the designer figure out where to draw connector lines and arrows?
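    Since goals #1 and #2 come up a lot, here is a minimal sketch of the designer the findings above point to, assuming Title, TitleHeight, and OnPaint are overridable as described; the title string, the extra height, and the drawn caption are all placeholders:

        class PlainDesigner : SequentialActivityDesigner
        {
            // extra room reserved beneath the title text (placeholder value)
            private const int ExtraTitleHeight = 14;

            public override string Title
            {
                get { return "Do Something Useful"; }  // friendlier than the object name
            }

            protected override int TitleHeight
            {
                get { return base.TitleHeight + ExtraTitleHeight; }
            }

            protected override void OnPaint(ActivityDesignerPaintEventArgs e)
            {
                base.OnPaint(e);

                // draw a property value into the strip reserved by ExtraTitleHeight;
                // "SomeProperty" is a hypothetical activity property
                string caption = "SomeProperty = 42";
                e.Graphics.DrawString(caption, e.DesignerTheme.Font, SystemBrushes.GrayText,
                                      Bounds.Left + 2, Bounds.Top + base.TitleHeight + 2);
            }
        }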


  • Collision point of 2 curves in 3D space

    - by Frank
    Hello, I have been programming a small game for quite some time. We started coding a small FPS shooter inside a project at school to get a bit of experience using DirectX. I don't know why, but I couldn't stop the project and started programming at home as well. At the moment I am trying to create some small AI. Of course that's definitely not easy, but it's my personal goal anyway. The topic could probably fill multiple books, hehe.

    I've got the walking part of my bots done so far; they walk along a scripted path. I am now working on the aiming of the bots. While programming that, I hit a math problem I couldn't solve yet. I hope for your input on this to help me get further. Concepts, ideas and everything else are highly appreciated.

    Problem: calculate the position (D3DXVECTOR3) where the curve of the projectile (which depends on gravity and speed) hits the curve of the enemy's walking path (which depends on speed). We assume that the enemy walks in a straight line.

    Known variables:

        float projectilSpeed = 2000 m/s  // speed of the projectile per second
        float gravitation = 9.81 m/s^2   // the gravity
        D3DXVECTOR3 targetPosition       // position of the target, stored as a vector (x, y, z)
        D3DXVECTOR3 projectilePosition   // position of the projectile
        D3DXVECTOR3 targetSpeed          // the change of the target's position over the last second

    Variable definitions:

        ProjectilePos_t = projectile position at time of collision
        TargetPos_t     = target position at time of collision
        ProjectilePos_0 = projectile position at time 0 (now)
        TargetPos_0     = target position at time 0 (now)
        t               = time to impact
        theta           = aim angle

    My try: I found a formula on Wikipedia to calculate the "drop" of the projectile, based on gravity:

        float drop = 0.5f * gravity * t * t

    The speed of the projectile has a horizontal and a vertical part. I found a formula for that on Wikipedia as well:

        ProjectilVelocity.x = projectilSpeed * cos(theta)
        ProjectilVelocity.y = projectilSpeed * sin(theta)

    So I would assume this is true for the projectile curve:

        ProjectilePos_t.x = ProjectilePos_0.x + ProjectileSpeed * t
        ProjectilePos_t.y = ProjectilePos_0.y + ProjectileSpeed * t + 0.5f * gravity * t * t
        ProjectilePos_t.z = ProjectilePos_0.z + ProjectileSpeed * t

    The target walks with a constant speed, so we can determine its curve by:

        TargetPos_t = TargetPos_0 + TargetSpeed * D3DXVECTOR3(t, t, t)

    Now I don't know how to continue. I have to solve this somehow to get hold of the time to impact. As a basic formula I could use:

        float time = distance / projectileSpeed

    But that wouldn't be truly correct, as it assumes a linear trajectory; we only find that behaviour when using a rocket. I hope I was able to explain the problem as clearly as possible. If there are questions left, feel free to ask me! Greetings from Germany, Frank
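    For what it's worth, a sketch of the standard first step: ignore gravity, so the projectile flies in a straight line at constant speed. The impact time then satisfies ||TargetPos_0 + TargetSpeed*t - ProjectilePos_0|| = projectilSpeed*t, which squares into a quadratic in t; gravity can afterwards be layered on by raising theta to compensate the drop 0.5*g*t^2 and iterating. The code below only assumes the variables named in the question:

        // relative position of the target (D3DXVECTOR3 has operator- overloaded)
        D3DXVECTOR3 D = targetPosition - projectilePosition;

        // coefficients of a*t^2 + b*t + c = 0
        float a = D3DXVec3Dot(&targetSpeed, &targetSpeed) - projectilSpeed * projectilSpeed;
        float b = 2.0f * D3DXVec3Dot(&D, &targetSpeed);
        float c = D3DXVec3Dot(&D, &D);

        float disc = b * b - 4.0f * a * c;
        if (disc >= 0.0f && a != 0.0f)
        {
            float t = (-b - sqrtf(disc)) / (2.0f * a);      // try one root...
            if (t <= 0.0f)
                t = (-b + sqrtf(disc)) / (2.0f * a);        // ...fall back to the other
            if (t > 0.0f)
            {
                // where the bot has to aim, ignoring gravity
                D3DXVECTOR3 aimPoint = targetPosition + targetSpeed * t;
            }
        }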


  • IPHONE DEVELOPMENT PROFILE EXPIRED - I TRIED EVERYTHING AND YES, I READ THE DOCS

    - by theiphoneguy
    I really combed this site and others. I read and re-read the related links here and the Apple docs. I'm sorry, but either I am obviously missing something right under my nose, or this Apple profile/certificate stuff is a bit convoluted. Here it is:

    I have a product in the App Store. I have updated it several times and users like it. My development profile recently expired, just when I was improving the app for its next release. I can run the app in the simulator. I can compile and put the distribution build on my iPhone just fine. I went to the Apple portal and renewed the development profile. I downloaded it and installed it in Xcode. I see it in the Organizer window. I see it on my iPhone. I CANNOT put the debug build on my iPhone to debug or run with Instruments. The message is that either there is not a valid signed profile or it is untrusted.

    I subsequently tried to download and install the certificate in my Mac's keychain. Still no success. I checked the code signing section of the project settings, for both the target and the root. All appears to indicate that it is using the expected development profile for debug. Yes, I had deleted the old profile from my iPhone and from the Organizer. I cleaned the Xcode cache and all targets. I have done all of this several times and in varying sequences to try to cover every possibility.

    I am ready to do anything to be able to debug with Instruments in order to check for leaks or high memory usage. Even though the distribution compile runs fine on my iPhone and plays well with other running processes, I will not release anything without a leaks/memory test. Any ideas will be appreciated. If I missed something obvious, please forgive me - it was not due to just posting a question without searching for similar postings. Thanks!


  • How can I have a Makefile automatically rebuild source files that include a modified header file? (I

    - by Nicholas Flynt
    I have the following makefile that I use to build a program (a kernel, actually) that I'm working on. It's from scratch and I'm learning about the process, so it's not perfect, but I think it's powerful enough at this point for my level of experience writing makefiles.

        AS = nasm
        CC = gcc
        LD = ld

        TARGET = core
        BUILD = build
        SOURCES = source
        INCLUDE = include
        ASM = assembly

        VPATH = $(SOURCES)

        CFLAGS = -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions \
                 -nostdinc -fno-builtin -I $(INCLUDE)
        ASFLAGS = -f elf

        #CFILES = core.c consoleio.c system.c
        CFILES = $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c)))
        SFILES = assembly/start.asm

        SOBJS = $(SFILES:.asm=.o)
        COBJS = $(CFILES:.c=.o)
        OBJS = $(SOBJS) $(COBJS)

        build : $(TARGET).img

        $(TARGET).img : $(TARGET).elf
        	c:/python26/python.exe concat.py stage1 stage2 pad.bin core.elf floppy.img

        $(TARGET).elf : $(OBJS)
        	$(LD) -T link.ld -o $@ $^

        $(SOBJS) : $(SFILES)
        	$(AS) $(ASFLAGS) $< -o $@

        %.o: %.c
        	@echo Compiling $<...
        	$(CC) $(CFLAGS) -c -o $@ $<

        #Clean Script - Should clear out all .o files everywhere and all that.
        clean:
        	-del *.img
        	-del *.o
        	-del assembly\*.o
        	-del core.elf

    My main issue with this makefile is that when I modify a header file that one or more C files include, the C files aren't rebuilt. I could fix this quite easily by making all of my header files dependencies of all of my C files, but that would effectively cause a complete rebuild of the project any time I changed or added a header file, which would not be very graceful.

    What I want is for only the C files that include the header file I change to be rebuilt, and for the entire project to be linked again. I can do the linking by making all header files dependencies of the target, but I cannot figure out how to make the C files be invalidated when their included header files are newer. I've heard that GCC has some options to make this possible (so the makefile can somehow figure out which files need to be rebuilt), but I can't for the life of me find an actual implementation example to look at. Can someone post a solution that will enable this behavior in a makefile?

    EDIT: I should clarify; I'm familiar with the concept of putting in individual targets and having each target.o require the header files. That requires me to edit the makefile every time I include a header file somewhere, which is a bit of a pain. I'm looking for a solution that can derive the header file dependencies on its own, which I'm fairly certain I've seen in other projects.
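    The GCC feature hinted at is dependency-file generation. A sketch of the usual idiom, adapted to the variable names above: -MMD makes each compile also write a .d fragment listing the headers that were actually included, -MP adds phony targets so deleted headers don't break the build, and -include pulls the fragments in on subsequent runs:

        DEPS = $(COBJS:.o=.d)

        %.o: %.c
        	@echo Compiling $<...
        	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

        # include the generated fragments; the leading '-' keeps the first
        # build (when no .d files exist yet) from failing
        -include $(DEPS)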

    Read the article

  • Windows splash screen using GDI+

    - by Luther
    The eventual aim of this is to have a splash screen in Windows that uses transparency, but that's not what I'm stuck on at the moment. In order to create a transparent window, I'm first trying to composite the splash screen and text on an off-screen buffer using GDI+. At the moment I'm just trying to composite the buffer and display it in response to a WM_PAINT message. This isn't working out at the moment; all I see is a black window. I imagine I've misunderstood something with regards to setting up render targets in GDI+ and then rendering them (I'm trying to render the screen using a straightforward GDI blit). Anyway, here's the code so far:

        //my window initialisation code
        void MyWindow::create_hwnd(HINSTANCE instance, const SIZE &dim)
        {
            DWORD ex_style = WS_EX_LAYERED; //eventually I'll be making use of this layered flag
            m_hwnd = CreateWindowEx(ex_style, szFloatingWindowClass, L"", WS_POPUP,
                                    0, 0, dim.cx, dim.cy, null, null, instance, null);
            SetWindowLongPtr(m_hwnd, 0, (__int3264)(LONG_PTR)this);
            m_display_dc = GetDC(NULL);

            //This was sanity-check test code - just loading a standard HBITMAP and
            //displaying it in WM_PAINT. It worked fine.
            //HANDLE handle = LoadImage(NULL, L"c:\\test_image2.bmp", IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE);

            m_gdip_offscreen_bm = new Gdiplus::Bitmap(dim.cx, dim.cy);
            m_gdi_dc = Gdiplus::Graphics::FromImage(m_gdip_offscreen_bm); //new Gdiplus::Graphics(m_splash_dc);

            //this draws the contents of my splash screen - this works if I create a GDI+
            //context for the window, rather than for an offscreen bitmap. For all I know,
            //it might actually be working, but when I try to display the contents on
            //screen, it shows a black image.
            draw_all();

            //this is just to show that drawing something simple on the offscreen bitmap
            //seems to have no effect
            Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 255));
            m_gdi_dc->DrawLine(&pen, 0, 0, 100, 100);
            DWORD last_error = GetLastError(); //returns '0' at this stage
        }

    And here's the snippet that handles the WM_PAINT message:

        case WM_PAINT:
        {
            BITMAP bm;
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(vg->m_hwnd, &ps);      //get the HWND's DC
            HDC hdcMem = vg->m_gdi_dc->GetHDC();        //get the HDC from our offscreen GDI+ object
            unsigned int width  = vg->m_gdip_offscreen_bm->GetWidth();  //width and height seem fine at this point
            unsigned int height = vg->m_gdip_offscreen_bm->GetHeight();
            BitBlt(hdc, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);   //this blits a black rectangle
            DWORD last_error = GetLastError();          //this was '0'
            vg->m_gdi_dc->ReleaseHDC(hdcMem);
            EndPaint(vg->m_hwnd, &ps);
            return 1;
        }

    My apologies for the long post. Does anybody know what I'm not quite understanding about how you write to an offscreen buffer using GDI+ (or GDI, for that matter) and then display it on screen? Thank you for reading.
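    (A hedged note on the likely culprit, not a definitive answer: the HDC returned by Gdiplus::Graphics::GetHDC() on a bitmap-backed Graphics is not guaranteed to be a memory DC with that bitmap selected into it, so a BitBlt from it tends to copy nothing useful. The usual approach is to wrap the paint DC in a Graphics and DrawImage the offscreen bitmap; a minimal sketch, reusing the question's member names:)

        case WM_PAINT:
        {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(vg->m_hwnd, &ps);
            Gdiplus::Graphics screen(hdc);                   //wrap the window's paint DC
            screen.DrawImage(vg->m_gdip_offscreen_bm, 0, 0); //copy the composited buffer to the screen
            EndPaint(vg->m_hwnd, &ps);
            return 0;
        }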

    Read the article

  • 1180: Call to a possibly undefined method addEventListener

    - by Chris
    I'm going through some AS3 training, but I'm getting a weird error... I'm trying to add an event listener to the end of a motion tween in AS. I've created a tween, highlighted the frames, right-clicked and copied the tween as AS, and pasted it into the movie clip (I think there's a better way to do this, but I'm not sure what it is...). When I try to add the listener to the end of that code, I get the error. Here's my code:

        import fl.motion.AnimatorFactory;
        import fl.motion.MotionBase;
        import fl.motion.Motion;
        import flash.filters.*;
        import flash.geom.Point;
        import fl.motion.MotionEvent;
        import fl.events.*;

        var __motion_Enemy_3:MotionBase;
        if (__motion_Enemy_3 == null)
        {
            __motion_Enemy_3 = new Motion();
            __motion_Enemy_3.duration = 30;

            // Call overrideTargetTransform to prevent the scale, skew,
            // or rotation values from being made relative to the target
            // object's original transform.
            // __motion_Enemy_3.overrideTargetTransform();

            // The following calls to addPropertyArray assign data values
            // for each tweened property. There is one value in the Array
            // for every frame in the tween, or fewer if the last value
            // remains the same for the rest of the frames.
            __motion_Enemy_3.addPropertyArray("x", [0]);
            __motion_Enemy_3.addPropertyArray("y", [0]);
            __motion_Enemy_3.addPropertyArray("scaleX", [1.000000,1.048712,1.097424,1.146136,1.194847,1.243559,1.292271,1.340983,1.389695,1.438407,1.487118,1.535830,1.584542,1.633254,1.681966,1.730678,1.779389,1.828101,1.876813,1.925525,1.974237,2.022949,2.071661,2.120372,2.169084,2.217796,2.266508,2.315220,2.363932,2.412643]);
            __motion_Enemy_3.addPropertyArray("scaleY", [1.000000,1.048712,1.097424,1.146136,1.194847,1.243559,1.292271,1.340983,1.389695,1.438407,1.487118,1.535830,1.584542,1.633254,1.681966,1.730678,1.779389,1.828101,1.876813,1.925525,1.974237,2.022949,2.071661,2.120372,2.169084,2.217796,2.266508,2.315220,2.363932,2.412643]);
            __motion_Enemy_3.addPropertyArray("skewX", [0]);
            __motion_Enemy_3.addPropertyArray("skewY", [0]);
            __motion_Enemy_3.addPropertyArray("rotationConcat", [0]);
            __motion_Enemy_3.addPropertyArray("blendMode", ["normal"]);
            __motion_Enemy_3.addPropertyArray("cacheAsBitmap", [false]);

            __motion_Enemy_3.addEventListener(MotionEvent.MOTION_END, hurtPlayer);

            // Create an AnimatorFactory instance, which will manage
            // targets for its corresponding Motion.
            var __animFactory_Enemy_3:AnimatorFactory = new AnimatorFactory(__motion_Enemy_3);
            __animFactory_Enemy_3.transformationPoint = new Point(0.499558, 0.500000);

            // Call the addTarget function on the AnimatorFactory
            // instance to target a DisplayObject with this Motion.
            // The second parameter is the number of times the animation
            // will play - the default value of 0 means it will loop.
            // __animFactory_Enemy_3.addTarget(<instance name goes here>, 0);
        }

        function hurtPlayer(event:MotionEvent):void
        {
            this.parent.removeChild(this);
        }

    I've tried the listener in a few places, both on the __animFactory_Enemy_3 variable and on the __motion_Enemy_3 variable - getting the same error both times.

    Read the article

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated with elements, used to retrieve those elements in sorted order. The scores are doubles, even though many users actually sort by integers (for instance, unix times). When the database is saved we need to write these doubles to disk. This is what is currently used:

        snprintf((char*)buf+1,sizeof(buf)-1,"%.17g",val);

    Additionally, infinity and not-a-number conditions are checked so that they can also be represented in the final database file. Unfortunately, converting a double into its string representation is pretty slow. We have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check whether a double could be cast to an integer without loss of data, and then use that function to turn the integer into a string if so. For this to provide a good speedup, of course, the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like this:

        double x = ... some value ...;
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning, the cast above converts the double into a long long, and then back into a double. If the value is in range and has no decimal part, the number will survive the round trip and be exactly the same as the initial number. As I wanted to make sure this would not break things on some system, I joined #c on freenode and I got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets? That is, archs where Linux / Mac OS X / *BSD / Solaris are running nowadays? What I can add to make the code saner is an explicit check for the range of the double before attempting the cast at all. Thank you for any help.
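    (A minimal sketch of the range-checked variant hinted at in the last paragraph, assuming C99; the two helper names are the question's own placeholders, declared here only so the fragment compiles:)

        #include <math.h>

        extern void use_the_fast_integer_function(long long v);
        extern void use_the_slow_snprintf(double d);

        void emit_score(double x)
        {
            /* Integers with |x| <= 2^53 are exactly representable as doubles and
             * fit comfortably in a long long, so the cast cannot overflow here.
             * floor(x) == x rejects values with a fractional part; NaN and
             * infinity fail the range check and fall through to snprintf. */
            if (x >= -9007199254740992.0 && x <= 9007199254740992.0 && floor(x) == x)
                use_the_fast_integer_function((long long)x);
            else
                use_the_slow_snprintf(x);
        }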

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36 37  | Next Page >