Search Results

Search found 5756 results on 231 pages for 'resource governor'.

Page 99 of 231

  • How to calculate production when player is offline

    - by Kaizer
    What is the best way to handle, for example, food growth based on how many food buildings you have? Let's say I have a web-based game where you can build a farm which generates 60 food units per hour, and a player has 1 farm in his possession. What is the best way to keep producing these units even when the player is offline? Should I do the math when the player gets back online again? If so, how can I do this without having to save his last online time every 5 seconds so I can do some maths with it when he logs back in (datetime.now - lastonlinetime)? Next question: when the player is online, should I refresh his resource count every 5 seconds or so by going to the database and back? That seems like a strange thing to do for every logged-on player. I hope you understand my question. Kind regards
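
    A common approach is lazy accrual: store only the timestamp of the last resource update and recompute whatever was produced in the meantime whenever the player logs in or the value is actually needed, rather than ticking every few seconds. Below is a minimal sketch in Java; the class, fields and rate are invented for illustration and would live wherever the player data lives.

        import java.time.Duration;
        import java.time.Instant;

        /** Lazily accrues farm production instead of ticking on a timer. */
        class FarmAccount {
            private static final double FOOD_PER_FARM_PER_HOUR = 60.0;

            private int farmCount;
            private double food;          // current stockpile
            private Instant lastUpdate;   // persisted alongside the stockpile

            FarmAccount(int farmCount, double food, Instant lastUpdate) {
                this.farmCount = farmCount;
                this.food = food;
                this.lastUpdate = lastUpdate;
            }

            /** Call before any read or write of the food value (login, page view, spend). */
            void settle(Instant now) {
                long seconds = Duration.between(lastUpdate, now).getSeconds();
                if (seconds <= 0) return;
                food += farmCount * FOOD_PER_FARM_PER_HOUR * (seconds / 3600.0);
                lastUpdate = now;
            }

            double currentFood(Instant now) {
                settle(now);
                return food;
            }
        }

    Because the stockpile is derived from lastUpdate, you only write to the database when something actually changes (a building is added, resources are spent, the player logs out); an online player's counter can be animated client-side between those writes.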

    Read the article

  • Exadata, Exalogic and Exalytics Partner Demo Equipment Purchase Initiative

    - by Cinzia Mascanzoni
    Oracle is pleased to announce that as part of the Demo Equipment Purchase Program, until December 31, 2012, Oracle VADs may purchase Exadata, Exalogic and Exalytics configurations, for their own demonstration purposes or to distribute to a partner for the partner's demonstration use, at 25% off the list price. In addition, purchasing partners will be eligible to receive 10% of list price (excluding support) in MDF funds to support the partner's Exadata/Exalogic/Exalytics demand generation activities. New units must be used for demo purposes for a minimum of 6 months before they may be resold to an end user. For more info, visit the VAD Resource center here.

    Read the article

  • My Habitat For Humanity Day Before TechEd 2010 in New Orleans

    Yesterday, I spent the day building what I think are stanchions (piles of blocks held together by mortar) to hold a house 5 feet above the ground. Through Twitter, I saw a bunch of people from...

    Read the article

  • And just like that, there went February

    - by Enrique Lima
    My best intentions had me perhaps posting more than twice a month, and here we are and … well, it has been busy. There has been a lot of SharePoint 2010 already in these first weeks of the year, a good amount of TFS 2010, and, even better, quite active work on two topics that I enjoy quite a bit … Windows Azure and Application Lifecycle Management. And there is more to come around those two. Through the influencers program with Geekswithblogs and Discountasp.net I got access to a hosted TFS solution that I am currently testing; I will be posting my findings and documenting a good amount of information on that process. Another great resource has been fpweb.net, and there will be more details on this too. Pretty exciting stuff! That is what is going on and what will be coming here shortly.

    Read the article

  • What Do Your Customers Want in an Online Experience?

    - by Christie Flanagan
    In a time where customers have an increasing number of choices and an increasing level of control over their relationships with brands, what matters most is engagement. In order to engage your customers online, you need to provide them with a relevant, interactive and multichannel experience. Check out this video to see the kind of engaging online experience that Oracle WebCenter can power for your customers. Want to learn more? Visit our Connected Customer Experience Resource Center to:

      • See a demonstration of how easy it is for marketers and other non-technical business users to create and manage online experiences like the one above with Oracle WebCenter Sites
      • Hear Ancestry.com describe how they use Oracle WebCenter Sites to deliver an online experience that converts site visitors into customers and keeps them coming back to learn more about their family histories
      • Hear what analysts are saying about the exciting new and enhanced web experience management capabilities in Oracle WebCenter Sites 11g

    Read the article

  • VDC Research Webcast: Engineering Business Value in the IoT with Java 8

    - by tangelucci
    Date: Thursday, June 19, 2014
    Time: 9:30 AM PDT, 12:30 PM EDT, 17:30 GMT

    The growth of the Internet of Things (IoT) opens up new service-driven opportunities, delivering increased efficiencies, better customer value, and improved quality of life. Realizing the full potential of the Internet of Things requires that we change how we view and build devices. These next-generation systems provide the core foundation of the services, rapidly transforming data into information and value. From healthcare to building control systems to vehicle telematics systems, the IoT focuses on how connected devices can become more intelligent, enhance interoperability with other devices, systems and services, and drive timely decisions while delivering real business return for all. Join this webcast to learn about:

      • Driving both revenue opportunities and operational efficiencies for the IoT value chain
      • Leveraging Java to make devices more secure
      • How Java can help overcome resource gaps around intelligent connected devices
      • Suggestions on how to better manage fragmentation in embedded devices

    Register here: http://event.on24.com/r.htm?e=793757&s=1&k=4EA8426D0D31C60A2EDB139635FF75AB

    Read the article

  • Is ROA a specific form of SOA?

    - by JohnDoDo
    I have read somewhere that ROA (Resource Oriented Architecture) is SOA (Service Oriented Architecture) with specific constraints added: SOA is the abstract term, and ROA is an implementation of SOA with all of the constraints of RESTful services (SOA = concept, ROA = concept + implementation details). I have also seen my share of posts saying that ROA is REST and SOA is SOAP, going into the usual more or less pertinent comparisons between the two (SOAP and REST, that is), etc. So, just to clear up my confusion: is ROA a specific form of doing SOA?
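
    One way to make the contrast concrete is the shape of the interface: a resource-oriented interface exposes nouns addressed by URIs and manipulated through the uniform HTTP verbs, whereas a service-oriented one typically exposes named operations behind a single endpoint. A minimal, hypothetical JAX-RS-style sketch of the resource-oriented side (paths and payloads are invented for illustration):

        import javax.ws.rs.DELETE;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.core.Response;

        // Resource-oriented: the URI names a thing, the HTTP verb says what to do with it.
        @Path("/orders")
        public class OrderResource {

            @GET
            @Path("/{id}")
            @Produces(MediaType.APPLICATION_JSON)
            public Response read(@PathParam("id") long id) {
                // Return a representation of the order resource.
                return Response.ok("{\"id\": " + id + ", \"status\": \"OPEN\"}").build();
            }

            @DELETE
            @Path("/{id}")
            public Response cancel(@PathParam("id") long id) {
                // "Cancel order" is expressed as deleting the resource, not as a cancelOrder() operation.
                return Response.noContent().build();
            }
        }

    A service-oriented interface for the same domain would instead publish operations such as getOrderDetails(orderId) or cancelOrder(request) behind one endpoint; the REST constraints (uniform interface, addressable resources, statelessness) are exactly what the "ROA = SOA plus constraints" reading refers to.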

    Read the article

  • How to structure "work packages" [closed]

    - by azerIO
    Could someone give me information about how one structures these so-called "work packages"? I have never done this before, but that's my task now. I need to describe use cases, preliminary definitions, example workflows of the application, goals, I/O of the application, requirements, etc. Does someone have a sample document with "work packages" or a link to a corresponding resource on the net? Thanks. PS: The original term is the German "Arbeitspaket" (work package).

    Read the article

  • Great F# getting started online book

    - by MarkPearl
    So I have been battling with F# for a few weeks, and it has been frustrating just getting my brain around the syntax, etc. Then someone commented on my blog that I should check out an online book called the F# Survival Guide. I highly recommend that anyone wanting to get into the basics of the language go through this resource. It is easy to understand, especially for someone coming from a C# background. Give it a read… it gets two thumbs up from me!

    Read the article

  • Cannot boot: "No init found. Try passing init=bootarg"

    - by glutz
    After losing power, my machine rebooted to this error: “error: no init found. Try passing init=bootarg”. Per similar threads on this site and others, I've tried booting from a CD, selecting "Try Ubuntu", then opening a terminal and typing: sudo fsck -y /dev/sda1. The response is: "Device or resource busy while trying to open /dev/sda1. Filesystem mounted or opened exclusively by another program?" This is on Ubuntu 10.10. Any ideas on what I can try next?

    Read the article

  • Cannot export JARs in LWJGL!

    - by NerdyLegend
    I did this before, but for some reason the same stupid problem is back. I want to export as a regular JAR file, not a folder full of files, and I export it like in Oskar Veerhoak's tutorial. cmd says:

        Exception in thread "main" java.lang.RuntimeException: Resource not found: res/FlubberFlap.png
            at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
            at com_FlubberSpace.MainFS.main(MainFS.java:118)

    I tested it with my other project, which has pretty much the same code. This is how I load my textures:

        wood = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/wood.png"));

    Of course it works fine in Eclipse but not after export. I haven't tried giving the previous game its own project; I have it in a package. Can someone record how to export properly? I want a JAR file that you just double-click to start. I also want it so nobody can extract the files and see my classes and res.
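
    Judging from the stack trace, the usual cause is that the res folder exists on disk during development but its contents never make it into the exported JAR, so the classpath lookup fails at runtime. A minimal sketch of classpath-based loading, assuming the res entries are actually included in the export (in Eclipse, tick the res folder in the JAR export dialog or add it to the build path as a source folder):

        import java.io.IOException;
        import java.io.InputStream;

        public final class Resources {

            private Resources() {}

            /** Loads a resource from the classpath, so the same path works in Eclipse and inside a JAR. */
            public static InputStream open(String path) throws IOException {
                InputStream in = Resources.class.getClassLoader().getResourceAsStream(path);
                if (in == null) {
                    throw new IOException("Resource not on classpath: " + path);
                }
                return in;
            }
        }

        // Usage, mirroring the snippet above:
        //   wood = TextureLoader.getTexture("PNG", Resources.open("res/wood.png"));

    As for hiding classes and resources: a JAR is just a ZIP archive, so anyone can extract it; obfuscation can make the contents harder to read, but it cannot prevent extraction.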

    Read the article

  • After upgrade my webcam mic records fast, high pitched, and squeaky only in Skype (maybe Sound Recorder problem too)

    - by Dennis
    After an upgrade to 11.10, which probably also updated Skype to 2.2.35 (not sure, because I never checked the version before), the sound that comes back from an echo test is very high pitched and squeaky. I'm not sure whether, when in a call, the other person can't hear me or just doesn't know what they are hearing. I am using a USB Logitech C250. Audacity records fine and Gmail video chat works fine, but if I start Sound Recorder I get a "Could not negotiate format", followed by "Could not get/set settings from/on resource". I don't know if this is a Skype problem or a wider PulseAudio problem. My only real needs are Gmail and Audacity, though I have a couple of contacts that I can only Skype with.

    Read the article

  • Universal Work Queue Quick Filter Examples

    - by LuciaC-Oracle
    If you use Universal Work Queue then it's likely that you will want to define and use your own Quick Filters. Quick Filters allow you to focus on specific work classes based on assigned criteria in a node. This makes it much easier for Agents to view their work grouped in a meaningful way. How to create Universal Work Queue - Quick Filters (Doc ID 803163.1) gives two worked examples to help you understand how to create your own Quick Filters:

      • Adding a 'Resource Group' filter
      • Adding an Overdue Amounts filter for use in Collections

    We hope you find these examples useful. Let us know by providing feedback on the document itself, or why not post to the MOS Service Community with your experience and suggestions.

    Read the article

  • Understanding how to go from a scene to what's actually rendered to screen in OpenGL?

    - by Pris
    I want something that explains, step by step, how, after setting up a simple scene, I can go from that 'world' space to what's finally rendered on my screen (i.e., actually implement something). I need the resource to clearly show how to derive and set up both orthographic and perspective projection matrices... basically I want to thoroughly understand what's going on behind the scenes and not plug in random things without knowing what they do. I've found lots of half explanations, presentation slides, walls of text, etc. that aren't really doing much for me. I have a basic understanding of linear algebra/matrix transforms, and a rough idea of what's going on when you go from model space to screen, but not enough to actually implement it in code.
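
    For reference, the matrices in question are standard. In the usual OpenGL convention (right-handed eye space, clip-space z in [-1, 1], with l, r, b, t, n, f the left/right/bottom/top/near/far planes of the view volume) they are:

        % Orthographic projection: linearly maps the box [l,r] x [b,t] x [-n,-f] onto the cube [-1,1]^3.
        P_{\mathrm{ortho}} =
        \begin{pmatrix}
        \frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\
        0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\
        0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\
        0 & 0 & 0 & 1
        \end{pmatrix}

        % Perspective projection for a symmetric frustum given by a vertical field of view and an
        % aspect ratio, with c = \cot(\mathit{fovy}/2).
        P_{\mathrm{persp}} =
        \begin{pmatrix}
        \frac{c}{\mathit{aspect}} & 0 & 0 & 0 \\
        0 & c & 0 & 0 \\
        0 & 0 & \frac{f+n}{n-f} & \frac{2fn}{n-f} \\
        0 & 0 & -1 & 0
        \end{pmatrix}

    The full chain is clip = P · V · M · p for an object-space point p; dividing the result by its w component gives normalized device coordinates, and the viewport transform maps those onto pixels. The key difference between the two matrices is the last row: the orthographic one leaves w = 1, while the perspective one copies -z into w so that the later divide produces foreshortening.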

    Read the article

  • Do teams get more productive by adding more developers? [duplicate]

    - by jgauffin
    This question already has an answer here: "Why does adding more resource to a late project make it later?" (12 answers)

    Suppose you've got a project that is running late. Is there any proof or argument that teams become much more productive by adding more people? I am looking for answers that can be supported by facts and references if possible. What I'm thinking about is that existing devs have to teach the new ones (thus losing overall development time), and then the new developers have to study the code (and tasks) before they can become fully productive.

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago: explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2-second mobile app startup. An app with a poor mobile experience results in not buying stuff, going to a competitor, not liking your company. Battery life. A bad mobile app is worse than no app at all because it turns people away from the brand, etc. Apps didn't exist 10 years ago; 72 billion dollars a year in 2013, 151 billion in 2017.

    Testing performance. Mobile is different from a regular app. Need to fix issues before customers discover them. ARO is a free and open source AT&T tool for identifying mobile app performance problems.

    Mobile data is different -- radio resource control state machine. Radio resource control -- radio goes from idle to continuous reception -- drains battery, sends data, packets coming through; after packets come through the radio is still on, which is tail time; after 10 seconds of no data coming through the radio goes off. For example, YouTube, e.g., 10 to 15 seconds after every connection, can be a huge drain on battery; app traffic triggers the RRC state.

    Goal. Balance fast network connectivity against battery usage. ARO is free and open source, tests any platform, and has won awards.

    How do I test my app? pcap or tcpdump the network. Native collector: Android and iOS. A rooted Android device is needed. Test the app on the phone, background data, idle for ads and analytics. Graded against 25 best practices. See all the processes, all network traffic mapped to processes, stats about the trace; can look just at your app, exclude Facebook, etc. Many tests conducted, e.g., file download, HTML (wrapped applications, e.g., Cordova).

    Best practices. Make stuff smaller. GZIP, smaller files, download faster, best for files larger than 800 bytes; minification -- remove tabs and commenting -- the browser doesn't need that, just give the processor what it needs, separate the wheat from the chaff. Images -- make images smaller; a 1024x1024 image for a checkmark, swish it, make it 33% smaller; ARO records the screen, probably could be 9 times smaller.

    Download less stuff. 17% of HTTP content on mobile is duplicate data because of caching; reloading from cache is 75% to 99% faster than downloading again; 75% possible savings, which means the app will start up faster because it is using the cache -- everyone wants the app starting up in 2 seconds.

    Make fewer HTTP requests. Inlining and combining CSS and JS when possible reduces the number of requests; sprite images that are used often.

    Fewer connections. Faster and uses less battery. For example, download an image every 60 seconds, download an ad every 60 seconds, send analytics every 60 seconds -- instead of that, use a transaction manager, download everything at once, and reduce the amount of time connected to the network by 40%. Also, 80% of applications do NOT close connections when they are finished; e.g., download a picture, 10 seconds later the radio turns off; if you do not explicitly close, eventually the server closes it: 38% more tail time, 40% less energy if you close the connection right away. Background data traffic is 27% of data and 55% of network time; this kills the battery.

    Look at redirection. Adds 200 to 600 ms on each connection. Waterfall diagram of all the requests -- e.g., xyz.com redirects to www.xyz.com redirects to xyz.mobi to www.xyz.com; waterfall visualization of packets; minimize redirects, but redirects are fine.

    HTML best practices. Order matters, and so does hiding code: JS downloading blocks rendering, so always do CSS before JS, or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser downloads them anyway, which adds latency to the application. Some apps turn on GPS for no reason. Tell the network when you are done, but maybe some other app is using the radio at the same time.

    It's all about knowing best practices: everyone wins with ARO (carriers, e.g., AT&T; developers; customers). Faster apps, better battery usage, better network traffic, better app reviews, happier customers. The MBTA app was referenced as an example. ARO is free, open source, and can test all platforms.
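
    To make the "close connections when you are finished" point concrete, here is a minimal sketch (plain HttpURLConnection; the URLs are invented) that reads a response fully, releases the connection immediately instead of leaving the radio in its high-power tail state, and batches several fetches into one radio wake-up rather than spreading them a minute apart:

        import java.io.ByteArrayOutputStream;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public final class BatchedFetch {

            /** Downloads a URL and releases the connection as soon as the body has been read. */
            static byte[] fetch(String url) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                try (InputStream in = conn.getInputStream()) {
                    ByteArrayOutputStream buf = new ByteArrayOutputStream();
                    byte[] chunk = new byte[8192];
                    int n;
                    while ((n = in.read(chunk)) != -1) {
                        buf.write(chunk, 0, n);
                    }
                    return buf.toByteArray();
                } finally {
                    conn.disconnect();   // do not wait for the server or the GC to close it
                }
            }

            public static void main(String[] args) throws Exception {
                // Batch related requests into one radio wake-up instead of one every 60 seconds.
                String[] urls = { "http://example.com/image.png", "http://example.com/ad.json" };
                for (String u : urls) {
                    System.out.println(u + ": " + fetch(u).length + " bytes");
                }
            }
        }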

    Read the article

  • Happy Birthday LearnVisualStudio.NET?

    - by TATWORTH
    Back in 2003, I made the changeover to .NET with the help of LearnVisualStudio.NET. They provide an excellent learning resource, and I commend membership to you. This week only, you can get started for as little as $48.97 for a 1 Year Subscription! Save 30% at LearnVisualStudio.NET: http://www.learnvisualstudio.net?awt_l=BN5TZ&awt_m=JaSOlFqKSr1QwB You can also get a Lifetime membership for only $139.97! That's over $59 in savings! A lifetime membership will grant you access to every video on the site and every video we ever create for LearnVisualStudio.NET without giving us another dime! This is a great chance to access over 900 tutorials to help you learn C#, VB, ASP.NET and more. Get started today! http://www.learnvisualstudio.net?awt_l=BN5TZ&awt_m=JaSOlFqKSr1QwB

    Read the article

  • Upgrade to 11.10 left me with KMail not working

    - by user86186
    On my way from 10.10 to 12.04 I chose to upgrade step by step (11.04, then 11.10, then 12.04) because I could not figure out whether I should do that or a direct install of 12.04 LTS. After the first step (11.04) all was well. The next day I upgraded to 11.10. Now I seem to have lost my data in KOrganizer and all of my email history in KMail. KMail still has my email accounts, but my tree structure of folders is missing, as are all of the emails that should be there. Also, when KMail opens I get the following: "KMail encountered a fatal error and will terminate now. The error was: Failed to fetch the resource collection." Clicking OK closes KMail. KAlarm appears OK, and my browser history is still in place, as are various data files. What to do??

    Read the article

  • ASP.NET MVC Multilingual Web Application

    - by BobhatePradip
    We are going to see how we can show localized content in your ASP.NET MVC web application. There are mainly two approaches.

    Approach 1: Using static pages. We can go for this approach only when we have a few/limited static localized pages.

    Approach 2: Using a dynamic page with localized data at runtime. We should go for this approach when we have a large number of pages that must show data in a localized format. In this approach we can either use resource files or pull the data directly from the database.

    For details, check this link: http://www.codeproject.com/KB/aspnet/ASP_NET_MVC_Multilingual.aspx There you can find a code sample with an explanation.

    Read the article

  • C# .NET 4.0 interactive course?

    - by Kanyhalos
    I would like to learn C# programming. I have already studied it for several weeks and written some minor programs with VS2010, and I'm not a complete newbie at programming because I worked on STALKER - Shadow of Chernobyl as a scripter, but that was Lua. I want to become a real programmer, and I think C# is a decent way to start. I have already learned about the most commonly used resource sites and got some nice eBooks as well, but unfortunately I don't have time to sit down in front of my computer all the time, so my progress is pretty slow. I would like to ask if someone can recommend some decent interactive online courses to speed up my learning. I know about the "joe grip" course, but I don't know if it's worth $39; also, it's only for .NET 1.x and 2.0 while I'd like to learn 4.0, so I have no idea what I should do.

    Read the article

  • Using bone joints

    - by raser
    I am trying to save bone joints to a file, and am using this format. I was wondering if anyone could clear up a few questions I have: Why do I need to provide rotation data for the bone if I already gave it the location? How do I calculate the rotation of each axis if I have the relative location from the parent joint?

    ** EDIT ** After doing some more digging, I think it has something to do with quaternions, so could someone point me to a good resource on using quaternions for bone joints?

    ** EDIT AGAIN ** I think I've solved it, but I don't understand how it works. I can't seem to find any Google results explaining it. I'd appreciate it if anyone could send me resources explaining it.
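
    For what it's worth, the usual reasoning goes like this: the offset from the parent joint only says where a joint sits, not how its local axes are turned, and that orientation is what the child bones and the skinned vertices inherit, so it has to be stored separately, typically as a unit quaternion. In the standard formulation:

        % Unit quaternion for a rotation by angle \theta about unit axis \hat{a}:
        q = \left(\cos\tfrac{\theta}{2},\; \hat{a}\,\sin\tfrac{\theta}{2}\right), \qquad \lVert q \rVert = 1

        % A point p is rotated by treating it as the pure quaternion (0, p):
        p' = q\,(0, p)\,q^{-1}

        % A joint's world-space pose accumulates down the hierarchy from its local offset t and local rotation q:
        q_{\mathrm{world}} = q_{\mathrm{parent}}\, q_{\mathrm{local}}, \qquad
        p_{\mathrm{world}} = p_{\mathrm{parent}} + q_{\mathrm{parent}}\,(0, t_{\mathrm{local}})\,q_{\mathrm{parent}}^{-1}

    This is why a format stores both a translation and a rotation per joint: two skeletons can have identical joint positions in the bind pose yet animate completely differently once the stored rotations are applied.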

    Read the article

  • Where should I ask for feedback about web design? [closed]

    - by mariosangiorgio
    Possible Duplicate: Where can I get my website critiqued? I am developing my personal website and I'd like to have feedback about its design. Is there any site/forum you would recommend? I know that the best solution would be to hire a professional web designer and have him design my website, but I am also interested in understanding how to improve my design skills. Of course any recommended book, website, or resource is more than welcome. I am not posting the link to my home page here because I think this Q&A site is more about web development in general, but if you'd like to see my personal page and give some feedback I'll link it.

    Read the article

  • SQL Server suddenly using only a small portion of CPU.

    - by hermiod
    We've got a Windows 2008 R2 server running SQL Server 2008. All of a sudden, the SQL Server process is refusing to go above 20% CPU usage. As of last week, when running a heavy query against the db it would rise to 100% usage, as I would expect. We've had this server for a while and it seems strange that it would just suddenly have this limit. This limit is causing our queries to take a lot longer than they normally would. No one has (knowingly at least) made any changes to the server configuration.

    After a bit of investigation, I discovered the sys.dm_os_sys_memory view. This shows 'available physical memory is high', but at the same time the available physical memory is 339552 KB whereas the total is 4193848 KB. It is worth noting that this is a virtual server running on VMware.

    Is there a setting somewhere within SQL Server that sets the maximum CPU usage? I've found the settings in Resource Governor, although this is currently off, as it always has been.

    We have recently started using Spotlight for SQL Server by Quest Software. Its playback database was located on this server for a short time this morning; I first noticed the problem shortly afterwards, although I hadn't been doing any queries prior to this, so I don't know if this is the point at which the problem began. However, the database was working as expected on Friday afternoon. The Windows log shows that the following settings were applied to the SpotlightPlaybackDatabase when it was created:

        02/21/2011 08:45:02,spid60,Unknown,Setting database option TORN_PAGE_DETECTION to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option MULTI_USER to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option READ_WRITE to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_UPDATE_STATISTICS to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CREATE_STATISTICS to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option ANSI_WARNINGS to OFF for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option CONCAT_NULL_YIELDS_NULL to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option RECOVERY to SIMPLE for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option QUOTED_IDENTIFIER to OFF for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CLOSE to OFF for database SpotlightPlaybackDatabase.

    Could any of these setting changes have modified the settings applied to the whole server?

    Read the article

  • First Foray–About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site, and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges. Enough about me… on to a real problem.

    SSIS Connection Timeouts versus Command Timeouts

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60 month window.

    My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure.

    After a number of further failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clue to the puzzle: there is a separate “CommandTimeout” custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds. The CommandTimeout property can also be edited in the SSIS Properties sheet.

    In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was twofold:

      • Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
      • SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair though, in the 5+ years that I have been working with SSIS, I can only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.) A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlie your particular layer.

    On the topic of confounding factors, as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on; otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general, for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries.

    Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally, if there's a key performance issue, you'll spot it with a moderate number of samples.

    I'll also use OS-level observability tools to look for the existence of bottlenecks where threads contend for locks; other situations where threads are blocked; and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected. Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.
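
    As a small illustration of the JIT warm-up caveat above, a hand-rolled microbenchmark should drive the code under test long enough for compilation to settle before any timing is recorded. The sketch below shows the rough shape of that; the workload is just a placeholder, and a proper harness such as JMH is preferable for real measurements.

        /** Rough microbenchmark skeleton: warm up the JIT, then measure in batches. */
        public final class WarmupBench {

            // Placeholder workload; replace with the code under test.
            static long work(int n) {
                long acc = 0;
                for (int i = 0; i < n; i++) {
                    acc += Integer.bitCount(i * 31 + 17);
                }
                return acc;
            }

            public static void main(String[] args) {
                long sink = 0;

                // Warm-up phase: give the JIT time to profile and compile the hot path.
                for (int i = 0; i < 20_000; i++) {
                    sink += work(10_000);
                }

                // Measurement phase: several timed batches, reported individually so trends are visible.
                for (int batch = 0; batch < 5; batch++) {
                    long start = System.nanoTime();
                    for (int i = 0; i < 1_000; i++) {
                        sink += work(10_000);
                    }
                    long elapsed = System.nanoTime() - start;
                    System.out.printf("batch %d: %.1f us/op%n", batch, elapsed / 1_000.0 / 1_000.0);
                }

                // Keep 'sink' live so the whole computation is not optimized away.
                System.out.println("checksum: " + sink);
            }
        }

    Printing the batches separately makes warm-up drift visible: if the later batches are still getting faster, the warm-up phase was too short.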

    Read the article
