Search Results

Search found 71854 results on 2875 pages for 'build time'.

Page 300/2875 | < Previous Page | 296 297 298 299 300 301 302 303 304 305 306 307  | Next Page >

  • How can I use the same buildscript for Flash Builder 4 and Ant/Mvn?

    - by David Laing
    I'm setting up a build system for a Flash Builder 4 (Flex 4) based project, and I'm struggling to get a setup that compiles in the IDE the same as it does from the command line on the build server. I come from a C# background, and my expectation is that I'll be able to create a "solution" with a collection of "projects" that I can compile from the IDE, or from the command line on the build server. The best I've managed so far is 2 separate build "scripts": a custom Ant script for the build server, and the default Flash Builder IDE config based on a workspace; but this is making my DRY daemons jump around in fury. Please can someone point me in the right direction :)

    Read the article

  • msbuild for .NET 3.5 issue with csla and System.Linq

    - by Sash
    This is a weird problem. I am trying to build a .NET 3.5 solution with MSBuild. I generally write custom build scripts for this, and when I tried this time to build a simple .NET assembly which internally uses CSLA, it started giving me Linq errors. However, if I build the .csproj file via MSBuild (command line) directly, it seems to build just fine. No issues at all. Has anyone else encountered this issue, and if yes, how do I fix it? Thanks, Sashidhar Kokku
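
    A hedged thing to check rather than a confirmed fix: that the custom build script invokes the .NET 3.5 MSBuild (whose toolset resolves System.Core/Linq references) rather than the 2.0 one. Assuming a default framework install path and a hypothetical solution name:

        rem Use the 3.5 MSBuild explicitly so LINQ/System.Core references resolve
        %windir%\Microsoft.NET\Framework\v3.5\MSBuild.exe MySolution.sln /p:Configuration=Release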

    Read the article

  • With Maven, how would I prevent Maven from filtering certain properties but allowing others?

    - by Benny
    The problem is that I'm trying to build a project that has in its resources a build.xml file. Basically, I package my project as a jar with Maven2, and then use ant installer to install my project. There is a property in the build.xml file that I need to filter called build.date, but there are other properties that I don't want to filter, like ${basedir}, because it's used by the ant installer but gets replaced by Maven's basedir variable. So, I need to somehow tell Maven to filter ${build.date}, but not ${basedir}. I tried creating a properties file as a filter with "basedir=${basedir}" as one of the properties, but I get the following error: Resolving expression: '${basedir}': Detected the following recursive expression cycle: [basedir] Any suggestions would be much appreciated. Thanks, B.J.
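
    One approach worth sketching - an assumption based on the maven-resources-plugin's configurable delimiters (2.4 or later), not something confirmed for this exact project: switch filtering to a custom delimiter such as @...@, reference the date as @build.date@ in build.xml, and ${basedir} then passes through untouched for Ant to resolve:

        <build>
          <resources>
            <resource>
              <directory>src/main/resources</directory>
              <filtering>true</filtering>
            </resource>
          </resources>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-resources-plugin</artifactId>
              <configuration>
                <!-- Only @token@ style placeholders are filtered;
                     ${basedir} in build.xml is left alone -->
                <useDefaultDelimiters>false</useDefaultDelimiters>
                <delimiters>
                  <delimiter>@</delimiter>
                </delimiters>
              </configuration>
            </plugin>
          </plugins>
        </build>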

    Read the article

  • NAnt IF task doesn't seem to work

    - by goombaloon
    I'm trying the example from the NAnt documentation for the if task at: http://nant.sourceforge.net/release/0.85/help/tasks/if.html Specifically the following code...

        <if test="${build.configuration='release'}">
            <echo>Build release configuration</echo>
        </if>

    where build.configuration has been defined beforehand as

        <property name="build.configuration" value="debug" overwrite="false" />

    When I run it using nant.exe (version 0.91.3881.0), I get the following error:

        '}' expected
        Expression: ${build.configuration='release'}
                    ^

    I'm guessing I'm missing something simple?
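
    For what it's worth, a hedged observation rather than a confirmed answer: NAnt's expression syntax tests equality with == rather than =, so the test would read:

        <if test="${build.configuration == 'release'}">
            <echo>Build release configuration</echo>
        </if>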

    Read the article

  • Should Factories Persist Entities?

    - by mxmissile
    Should factories persist entities they build? Or is that the job of the caller? Pseudo example incoming:

        public class OrderFactory
        {
            public Order Build()
            {
                var order = new Order();
                ....
                return order;
            }
        }

        public class OrderController : Controller
        {
            public OrderController(IRepository repository)
            {
                this.repository = repository;
            }

            public ActionResult MyAction()
            {
                var order = factory.Build();
                repository.Insert(order);
                ...
            }
        }

    or

        public class OrderFactory
        {
            public OrderFactory(IRepository repository)
            {
                this.repository = repository;
            }

            public Order Build()
            {
                var order = new Order();
                ...
                repository.Insert(order);
                return order;
            }
        }

        public class OrderController : Controller
        {
            public ActionResult MyAction()
            {
                var order = factory.Build();
                ...
            }
        }

    Is there a recommended practice here?

    Read the article

  • TFS: Managing assembly version number?

    - by TomTom
    Hello, any good approach for managing assembly version numbers in TFS, possibly together with using the same number for the build number? I would be most interested in an approach that:

    - Maintains the first three elements of the version
    - Counts the rest upward for every "official" build (i.e. a build originating from certain templates only - no need to count up for something like a gated check-in, but the following regular integration build SHOULD count up)
    - Labels the builds, so that a manual "release" build can be triggered

    Any solution? How are other people handling this? Right now the (new) TFS is happily building with the same assembly version all around ;) Something encoding the complete assembly version with date etc. is not acceptable - I want that number to "follow rules", and having the date in there is not one of them ;)
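
    Not a built-in TFS feature, but a minimal sketch of one common pattern: a tiny tool, run only by the "official" build templates before compilation, that stamps the fourth element of AssemblyVersion from the build counter. File layout and arguments here are hypothetical:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        // Rewrites [assembly: AssemblyVersion("1.2.3.0")] style attributes so the
        // fourth element follows the build counter. Intended as a pre-build step
        // on the build server only; local IDE builds keep the checked-in version.
        class StampVersion
        {
            static void Main(string[] args)
            {
                string path = args[0];     // e.g. Properties\AssemblyInfo.cs
                string revision = args[1]; // e.g. the incrementing build counter

                string text = File.ReadAllText(path);
                text = Regex.Replace(
                    text,
                    @"(AssemblyVersion\(""\d+\.\d+\.\d+)\.\d+(""\))",
                    "$1." + revision + "$2");
                File.WriteAllText(path, text);
            }
        }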

    Read the article

  • Installing app on Devices (iPhone/iPod)

    - by Shakti
    Hi, I have developed an application for iPhone and iPod touch. Now I want to install that app on the device. I have created a provisioning profile for the particular device. My query is this: to install the app on a device, I currently have to add the provisioning profile of that device to Xcode, select the developer in the build option in the project info pane, build the project, and send the build and provisioning profile to the user. Is there any way to install the app without repeating the above task of building the project with the provisioning profile? That is, I have an existing build of the app and a newly downloaded provisioning profile. If I send the user both the build and the profile in a zip file, will the user be able to install the app on his/her device?

    Read the article

  • Python: mysqldb install error

    - by Grenko
    So I've been pulling my hair out trying to install the mysqldb package. When I run the build I get a long transcript of errors; here's just part of it (I would post it all, but it's a huge list):

        [rv@med240-183 MySQL-python-1.2.3c1]$ sudo python setup.py build
        [sudo] password for rv:
        running build
        running build_py
        copying MySQLdb/release.py -> build/lib.linux-i686-2.6/MySQLdb
        running build_ext
        building '_mysql' extension
        gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i586 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -Dversion_info=(1,2,3,'gamma',1) -D__version__=1.2.3c1 -I/usr/include/mysql -I/usr/include/python2.6 -c _mysql.c -o build/temp.linux-i686-2.6/_mysql.o -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -fasynchronous-unwind-tables -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv -fPIC -DUNIV_LINUX
        _mysql.c:36:23: error: my_config.h: No such file or directory
        _mysql.c:38:19: error: mysql.h: No such file or directory

    Any ideas?
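
    A hedged reading of those last two errors (my_config.h and mysql.h missing): the MySQL client development headers are not installed. On a Red Hat/Fedora style system, which the prompt and paths suggest, something like:

        # install the MySQL client development headers the build is missing
        sudo yum install mysql-devel
        # then rebuild
        sudo python setup.py build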

    Read the article

  • Facing a problem when making a setup of a Windows Service

    - by prateeksaluja20
    I am trying to make a setup for a Windows service, but when I build the setup project the output is as follows:

        ------ Build started: Project: TwitterService, Configuration: Debug Any CPU ------
        TwitterService -> C:\Users\Globus-n2\Desktop\LatestTweetMati\TwitterService\bin\Debug\TwitterService.exe
        ------ Starting pre-build validation for project 'Setup' ------
        ------ Pre-build validation for project 'Setup' completed ------
        ------ Build started: Project: Setup, Configuration: Debug ------
        Building file 'C:\Users\Globus-n2\Desktop\LatestTweetMati\Setup\Debug\Setup.msi'...
        Packaging file 'TwitterService.exe.config'...
        Packaging file 'GlobusLib.dll'...
        Packaging file 'TwitterService.exe'...
        Packaging file 'GlobusTwitterLib.dll'...
        ========== Build: 1 succeeded or up-to-date, 1 failed, 0 skipped ==========

    The setup build fails, but I am not getting any error message. I have tried making a fresh copy of the project and re-adding the DLLs, but the problem remains. Can anyone please help me solve this? I'd be very thankful.

    Read the article

  • Running MSBuild script on development machine

    - by devdigital
    Hi, I have an MSBuild script which performs a lot of tasks, as it is run on our build server. I want the script to be run each time a developer builds from Visual Studio on their local development machine, so that: a) the build process they are running locally is the same as that run by the build server, so any problems in the build can be identified immediately by the developer; and b) many of the operations of the build script are run on local builds, for example running of unit tests, generation of code coverage reports etc. How is this possible in Visual Studio (2008)? Note I am running a single solution product with multiple projects.
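
    A minimal sketch of one way to do this with MSBuild 3.5 (the version behind VS2008); the Common.targets file name is hypothetical. Each .csproj imports a shared targets file that overrides the AfterBuild hook, which both the IDE and command-line MSBuild invoke automatically:

        <!-- at the bottom of each .csproj, after the Microsoft.CSharp.targets import -->
        <Import Project="$(SolutionDir)Build\Common.targets" />

    and in Common.targets:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <!-- runs after every build, in VS and on the build server alike -->
          <Target Name="AfterBuild">
            <CallTarget Targets="RunUnitTests" />
          </Target>
          <Target Name="RunUnitTests">
            <!-- invoke the test runner / coverage tooling here -->
          </Target>
        </Project>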

    Read the article

  • How can I replicate the functionality of the Flash Builder's release tool in ant?

    - by Chris R
    I want to build an ant script that does exactly the same compilation actions on a Flash Builder 4 (Gumbo) project as the Project->Export Release Build... menu item does. My ant-fu is reasonably strong; that's not the issue, but rather I'm not sure exactly what that entry is doing. Some details: I'll be using the 3.x SDK (say, 3.2 for the sake of specificity) to build this. I'll be building on a Mac, and I can happily use ant, make, or some weird shell script stuff if that's the way you roll. Any useful optimizations you can suggest will be welcome. The project contains a few assets, MXML and ActionScript source, and a couple of .swcs that are built into the project (not RSL'd). Can someone provide an ant build.xml or makefile that they use to build a release .swf file from a similar Flex project?
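
    Not the literal output of Export Release Build, but a hedged starting point using the Flex SDK's own Ant tasks (flexTasks.jar ships under the SDK's ant/lib directory in the 3.x SDKs; paths and file names below are placeholders):

        <project name="release-build" basedir="." default="release">
            <property name="FLEX_HOME" location="/path/to/flex_sdk_3.2"/>
            <taskdef resource="flexTasks.tasks"
                     classpath="${FLEX_HOME}/ant/lib/flexTasks.jar"/>

            <target name="release">
                <mxmlc file="src/Main.mxml" output="bin-release/Main.swf">
                    <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
                    <source-path path-element="src"/>
                    <!-- release-style settings: no debug info, optimizer on -->
                    <compiler.debug>false</compiler.debug>
                    <compiler.optimize>true</compiler.optimize>
                    <!-- link in the project's .swc libraries -->
                    <compiler.library-path dir="libs" append="true">
                        <include name="*.swc"/>
                    </compiler.library-path>
                </mxmlc>
            </target>
        </project>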

    Read the article

  • Unable to open executable - xcode

    - by Filipe Mota
    I'm getting this error... any idea how to solve it?

        GenerateDSYMFile /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app.dSYM /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app/PBTest
            cd /Users/fmota/Documents/Developer/Protobuf/PBTest
            setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Developer/usr/bin/dsymutil /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app/PBTest -o /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app.dSYM

        error: unable to open executable '/Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app/PBTest'

    Read the article

  • How to solve 'Badly constructed integrity constraints' in Doctrine

    - by Carson
    When I execute the command './doctrine build-all-reload', it produces the following output:

        build-all-reload - Are you sure you wish to drop your databases? (y/n) y
        build-all-reload - Successfully dropped database for connection named 'doctrine'
        build-all-reload - Generated models successfully from YAML schema
        build-all-reload - Successfully created database for connection named 'doctrine'
        Badly constructed integrity constraints. Cannot define constraint of different fields in the same table.

    Here is the source code of Doctrine that outputs the error: here. What causes the error? How can I debug where the error comes from?

    Read the article

  • How do I get MSBuild Task to generate XML Documents when building a solution?

    - by toba303
    I have a solution with lots of projects. Each project is configured to generate the XML documentation file when building in Debug mode (which is the default). That works when I build in Visual Studio 2008. In my build script on my integration server I tell MSBuild to build the whole solution, but it won't generate the documentation files. What can I do? I already tried to explicitly pass the Debug condition to the build process, but it makes no difference.

        <Target Name="BuildSolution">
          <MSBuild Projects="C:\Path\To\MySolution.sln"
                   Targets="Build"
                   Properties="SolutionConfigurationPlatforms='Debug|Any CPU'"/>
        </Target>

    There seem to be some ideas to solve this problem when building single projects, but I can't afford to do that, so I need a hint for doing it this way. Thanks in advance!
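
    A hedged correction worth trying: SolutionConfigurationPlatforms is not a property the solution build consumes; configuration and platform are normally passed as the separate Configuration and Platform properties, which also lets the per-project Debug settings (including the XML documentation option) take effect:

        <Target Name="BuildSolution">
          <MSBuild Projects="C:\Path\To\MySolution.sln"
                   Targets="Build"
                   Properties="Configuration=Debug;Platform=Any CPU"/>
        </Target>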

    Read the article

  • How are builds deployed into QA->Staging->Production for ASP.NET Web Applications?

    - by CodeToGlory
    Secondary questions are: How do we best utilize SCM in the build process? How are code files labeled and branched? Should we use the .csproj and .sln files for the build, and how flexible are these when deploying to several environments? I know these are MSBuild files, but as we add new files, updating and maintaining these .csproj files in SCM can become a bottleneck. How is rollback done in case of failed builds that QA missed in testing, etc.? Are there any good articles on the build process? This is more a question on the process and less on the choice of automated build tools. Please share your build process. I would like to get an end-to-end view, from developers checking in to going live.

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    We will have a call to discuss this blog - please join us!
    Date: Thursday, November 1, 2012
    Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)
    1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: oracle123
    4. Click "Join".
    To join the teleconference:
    Call-in toll-free number: 1-866-682-4770 (US/Canada)
    Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON
    Conference Code: 7629343#
    Security code: 7777#

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or on a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - View Solaris services status details
    - Drill down into a process's details
    - View the busiest zones, if applicable

    Where to start: choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category. You can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process along with some port bindings and open file descriptors.

    On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab.

    Next, click the "Processes" tab to see real time information on all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services will be displayed, but you can choose to display only certain states, for example those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file.

    The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the currently set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter, more detailed time frames, such as one hour or one day.

    Applying new monitoring values: when first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past: under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to look at it in real time. Viewing yesterday's data on one of the machines, you can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood: the data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/
    - An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the epoch time stamp at which they were taken.
    - If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it.
    - If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.

    The actual script collecting the data can be viewed for debugging purposes as well:
    - On Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect
    - On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect

    If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

        # touch /tmp/.collect.stderr

    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - Introduction

    In his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let’s take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I’ll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let’s get started.

    We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do... often what we do best! We “fat finger”, we spill drinks on keyboards, unplug the wrong cables, etc. In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events:
    - Accidentally updated a table with wrong values !!
    - Performed a batch update that went wrong - due to logical errors in the code !!
    - Dropped a table !!

    How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point in time when the error occurred (incomplete or point-in-time recovery). Moreover, depending on the type of fault, it’s possible that some services – or even the entire database – would have to be taken down during the recovery process. Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction and how can it be reversed? Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and convenient recovery from logical corruptions.

    History: Flashback Query, the first Flashback technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time. Flashback technologies were further enhanced in Oracle 10g, to provide fast, easy recovery at the database, table, row, and even transaction level. Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation. In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c.

    Flashback Features:
    - Oracle Flashback Database enables point-in-time recovery of the entire database without requiring a traditional restore and recovery operation. It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.
    - Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.
    - Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.
    - Oracle Flashback Query enables data to be viewed at a point in time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). This feature can also be used to build self-service error correction into applications, empowering end users to undo and correct their errors.
    - Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCN).
    - Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing SQL.
    - Oracle Flashback Transaction is a procedure used to back out a transaction and its dependent transactions.

    Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via the SQL command line or via Enterprise Manager. Most of the Flashback technologies depend on the available UNDO to retrieve older data. The following summary describes the various Flashback technologies: their purpose, dependencies and situations where each individual technology can be used.

    Example Syntax

    Error investigation related - the purpose is to investigate what went wrong and what the values were at certain points in time:
    - Flashback Query (select .. as of SCN | Timestamp) - helps to see the value of a row/set of rows at a point in time
    - Flashback Version Query (select .. versions between SCN | Timestamp and SCN | Timestamp) - helps determine how the value evolved between certain SCNs or between timestamps
    - Flashback Transaction Query (select .. XID=) - helps to understand how the transaction caused the changes

    Error correction related - the purpose is to fix the error and correct the problems:
    - Flashback Table (flashback table .. to SCN | Timestamp) - to rewind the table to a particular timestamp or SCN to reverse unwanted updates
    - Flashback Drop (flashback table .. to before drop) - to undrop or undelete a table
    - Flashback Database (flashback database to SCN | Restore Point) - this is the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery).
    - Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..)) - to reverse a transaction and its related transactions

    Advanced use cases: Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback:
    - Block media recovery by RMAN - to perform block-level recovery
    - Snapshot Standby - where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes
    - Re-instate an old primary in a Data Guard environment - this avoids the need to restore an old backup and perform a recovery to make it a new standby
    - Guaranteed Restore Points - to bring back the entire database to an older point in time in a guaranteed way
    and so on.

    I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors. As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.
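
    To make the syntax summary concrete, a minimal hedged illustration (the orders table and the one-hour window are invented for the example; Flashback Table additionally requires row movement to be enabled on the table):

        -- view the rows as they were one hour ago (Flashback Query)
        SELECT * FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

        -- rewind the table to that point in time (Flashback Table)
        ALTER TABLE orders ENABLE ROW MOVEMENT;
        FLASHBACK TABLE orders TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);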

    Read the article

  • Profiling Startup Of VS2012 - SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made by a company called Ipcas. A single professional license costs 449€+VAT. For the test I used SpeedTrace 4.5, which is currently Beta. Although it is cheaper than dotTrace, it has by far the most options to influence how profiling works.

    First you need to create a tracing project which configures tracing for one process type. You can start the application directly from the profiler or (much more interesting) it can attach to a specific process when it is started. For this you need to check the "Trace the specified ..." radio button and enter the process name in the "Process Name of the Trace" edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button, and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start, you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see, there are many more tabs which allow you to influence tracing in a much more sophisticated way.

    SpeedTrace is the only profiler which does not rely entirely on the profiling API of .NET. Instead it modifies the IL code (instrumentation on the fly) to write tracing information to disc, which can later be analyzed. This approach is not only very fast but gives you unprecedented analysis capabilities. Once the traces are collected, they show up in your workspace, where you can open the trace viewer. I skip the other windows because this view is by far the most useful one. You can sort the methods not only by wall clock time but also by CPU consumption and wait time, which none of the other products support in their views at the same time. If you want to optimize for CPU consumption, sort by CPU time. If you want to find out where most time is spent, you need Clock Total time and Clock Waiting. There you can directly see whether a method took long because it was waiting on something or because it really executed work that took that long.

    Once you have found a method you want to drill deeper into, you can double click it to get to the Caller/Callee view, which is similar to the JetBrains Method Grid view. But this time you see much more. In the middle is the clicked method. Above are the methods that call it, and below are the methods that it directly calls. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut. You can press the magic button to calculate the aggregation of all called methods. This is displayed in the lower left window, where you can see each method call and how long it took. There you can also sort, to see whether this call stack contains only methods not worth optimizing (e.g. WCF connect calls which you cannot make faster). YourKit has a similar feature, where it is called Callees List.

    In the Functions tab you have many other useful analysis options in the context menu. One really outstanding feature is the View Call History Drilldown. When you select this one, you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising, since SpeedTrace uses tracing to get its timings. There you get many useful graphs of how this method behaved over time. Did it become slower at some point in time, or was only the first call slow? The diagrams and the list will tell you that.

    That is all fine, but what should I do when one method call was slow? I want to see where it was coming from. No problem: select the method in the list, hit F10, and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today serializers are used everywhere. You want to find out where the 5s XmlSerializer.Deserialize call came from? Hit F10 and you get the call stack which invoked the 5s Deserialize call.

    The CPU timeline tab is also useful, to find out where long pauses or excessive CPU consumption happened. Click in the graph to get the Thread Stacks window, where you can get a quick overview of what all threads were doing at this time. This looks like the Stack Traces feature in YourKit, only this time you get the last called method first, which helps to quickly see what all threads were executing at this moment. YourKit generates a rather long list, which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that, but you see which methods the profiler found executing code most often, which is a good indication of the methods consuming most CPU time.

    Does this sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I see a crash or some other odd behavior, I send a mail to Ipcas and usually get a mail the next day saying that the problem has been fixed, along with a download link to the new version. The guys at Ipcas are even so helpful as to log in to your machine via a Citrix client to help you get started profiling the actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working to get 4.5 out of the door. But still, the support is by far the best I have encountered so far.

    The only downside is that you should instrument your assemblies, including the .NET Framework, to get the most accurate numbers. You can profile without doing so, but then you will see very high JIT times in your process, which can severely affect the correctness of the measured timings. If you do not care about exact numbers, you can also enable logging of method arguments of primitive types in the main UI in the Data Trace tab. If you need to know which files were opened at which times by your application, you can find it out without a debugger. Since SpeedTrace reads huge trace files in its reader, you should perhaps use a 64 bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste, but they have promised something much improved for the next version.

    Read the article

  • "A copy of Firefox is already open. Only one copy of Firefox can be open at a time."

    - by Isaac Waller
    I cannot start Firefox on my Mac. It just says "A copy of Firefox is already open. Only one copy of Firefox can be open at a time." I have tried restarting the computer. Any fixes? You have suggested deleting the lock files in my profile, but I don't have a profile: I was trying to fix the problem in question http://superuser.com/questions/3275/firefox-on-mac-slow-slow-slow by deleting my profile, so I deleted it, and this came up. So I cannot delete the lock files, because they don't exist.

    Read the article

  • "A copy of Firefox is already open. Only one copy of Firefox can be open at a time."

    - by Isaac Waller
    I cannot start Firefox on my Mac. It just says "A copy of Firefox is already open. Only one copy of Firefox can be open at a time." I have tried restarting the computer. Any fixes? You have suggested deleting the lock files in my profile, but, I don't have a profile. I was trying to fix the problem in question http://superuser.com/questions/3275/firefox-on-mac-slow-slow-slow by deleting my profile, so I deleted it, and this came up. So I cannot delete the lock files because they don't exist.

    Read the article

  • linux "date -s" command not working to change date on a server

    - by nelaar
        date +%T --set="12:19:06"
        12:19:06
        date
        Mon Nov 26 12:37:32 SAST 2012

    I have tried many different forms of this command but nothing seems to work; changing the date on this server (running as a VM) is not working. Our messages log shows messages like this:

        ntpd[3496]: time correction of -1098 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.

    Our server is now about 20 minutes out. It seems like our server has not been updating the time correctly for a few days.

        Nov 22 19:29:23 hostname ntpd[1818]: time reset -998.577519 s
        Nov 22 19:32:34 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 19:33:39 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 19:52:30 hostname ntpd[1818]: time reset -998.992426 s
        Nov 22 19:55:47 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 19:56:53 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:13:04 hostname ntpd[1818]: time reset -999.374412 s
        Nov 22 20:16:40 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 20:17:44 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:32:02 hostname ntpd[1818]: time reset -999.716832 s
        Nov 22 20:35:28 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10
        Nov 22 20:36:16 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1
        Nov 22 20:56:39 hostname ntpd[1818]: time correction of -1000 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.
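
    A hedged sequence that usually resolves the sanity-limit refusal (assuming a Red Hat style service manager; a running ntpd will not step a clock that is this far out):

        # stop ntpd so it releases the clock
        service ntpd stop
        # step the clock once from a reference server (pool.ntp.org as an example)
        ntpdate pool.ntp.org
        # restart ntpd to keep the clock disciplined from now on
        service ntpd start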

    Read the article

  • PHP page generation takes 0.01s, so 1/0.01 = 100; however, I'm having problems reaching that number of requests per second. Why?

    - by cedivad
    On average, my PHP page generation time is 10ms, so I should be able to execute 100 requests one after the other (using a single core on the server, since PHP is not multithreaded). However, I'm having problems reaching 50 pages per second. As of now I do 25 on average, with a medium load. The application is really light; it consists of a read (<5KB) from a pool of SSDs and some read queries resolved by indexes. Where should I look to solve this bottleneck?
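
    A hedged first step for locating the bottleneck (assuming ApacheBench is available; the URL is a placeholder): measure throughput at increasing concurrency, since a single serial client can never exceed 1/latency requests per second once network round trips are included:

        # 1000 requests at 10 concurrent clients; compare the results of -c 1 vs -c 10
        ab -n 1000 -c 10 http://example.com/page.php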

    Read the article

  • Trying to fill the GAE datastore, but the code consumes too much CPU time. How can I optimize this?

    - by Neverland
    I am trying to get the list of images in Amazon EC2 into the Google datastore. I want to do this with a cron job inside GAE.

        class AmazonEC2uswest(db.Model):
            ami = db.StringProperty(required=True)
            mani = db.StringProperty()
            typ = db.StringProperty()
            arch = db.StringProperty()
            state = db.StringProperty()
            owner = db.StringProperty()

        class CronAMIsAmazonUS_WEST(webapp.RequestHandler):
            def get(self):
                aws_access_key_id_admin = "<secret>"
                aws_secret_access_key_admin = "<secret>"

                conn_us_west = boto.ec2.connect_to_region('us-west-1',
                    aws_access_key_id=aws_access_key_id_admin,
                    aws_secret_access_key=aws_secret_access_key_admin,
                    is_secure = False)

                liste_images_us_west = conn_us_west.get_all_images()
                laenge_liste_images_us_west = len(liste_images_us_west)

                for i in range(laenge_liste_images_us_west):
                    datastore_uswest_AMIs = AmazonEC2uswest(ami=liste_images_us_west[i].id,
                                                            mani=str(liste_images_us_west[i].location),
                                                            typ=liste_images_us_west[i].type,
                                                            arch=liste_images_us_west[i].architecture,
                                                            state=liste_images_us_west[i].state,
                                                            owner=liste_images_us_west[i].ownerId)
                    datastore_uswest_AMIs.put()

    The problem: getting the list with get_all_images() takes only a few seconds, but writing the data to the Google datastore needs way too much CPU time. My IBM T42p (P4M with 2GHz) needs approx. 1 minute for that piece of code! Is it possible to optimize my code so that it needs less CPU time?
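
    One hedged optimization worth sketching: build all the entities first, then write them with a single batched db.put() call instead of one put() per entity, which cuts one datastore round trip per image down to one per batch. A fragment of the handler above, reusing its names:

        liste_images_us_west = conn_us_west.get_all_images()
        # build all entities in memory, then write them in one batch;
        # db.put() accepts a list and is far cheaper than put() per entity
        entities = [AmazonEC2uswest(ami=img.id,
                                    mani=str(img.location),
                                    typ=img.type,
                                    arch=img.architecture,
                                    state=img.state,
                                    owner=img.ownerId)
                    for img in liste_images_us_west]
        db.put(entities)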

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering, announcing the general availability of the 12c release of the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified and high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.

    The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes:
    - Future-Ready Solutions: supporting current and emerging initiatives
    - Extreme Performance: even higher performance than ever before
    - Fast Time-to-Value: higher IT productivity and simplified solutions

    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.

    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
    - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments supported by third-party clouds.

    Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:
    - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low-overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training.
    - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
    - Delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
    - Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.

    Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release.

    Resource Kits:
    - Meet Oracle Data Integration 12c
    - Discover what's new with Oracle GoldenGate 12c

    The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Stay Connected
    Oracle Newsletters

    Read the article
