Search Results

Search found 24220 results on 969 pages for 'performance tools'.


  • VMware ESXi: Looking for Bottlenecks

    - by nextgenneo
    I have a VMware ESXi box: 22 GB RAM, dual quad-core Xeons, 2 SAS drives, and a write-caching RAID controller. It runs about 30 small XP VMs, and I'm starting to see very slow boot times and other performance issues. I think it's I/O, but looking at the graphs I'm not sure what to look for. Any ideas on what to look for would be appreciated. Here is the data I've got so far (I feel like my I/O is high, but I'm not sure what to benchmark it against).
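
    A hedged starting point for pinning down storage latency, assuming you can get to the ESXi shell (Tech Support Mode / SSH enabled): esxtop can log per-device disk latency so you can tell whether the slowness lives in the device itself (DAVG) or in the kernel/queueing layer (KAVG). The thresholds in the comments are rules of thumb, not VMware guidance specific to this box.

        # from the ESXi shell: log about a minute of samples at 10 s intervals
        esxtop -b -d 10 -n 6 > /tmp/esxtop-io.csv

        # or run esxtop interactively, press 'd' (disk adapter) or 'v' (disk VM) and watch
        # DAVG/cmd (device latency) and KAVG/cmd (kernel/queue latency); sustained DAVG in
        # the 20-25 ms+ range on two SAS spindles shared by 30 VMs would point squarely at I/O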

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16 GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a couple of hours, all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks were occurring and query results were being returned quickly.

    I couldn't make any sense of it, so I set up the following Perfmon counters: Current Anonymous Users (for the site in question), Get Requests/sec (ditto), and Requests/sec for the ASP.NET application running the site. Get Requests/sec was averaging 100-150 and Requests/sec for ASP.NET was averaging 5-10, but Current Anonymous Users was around 200. Then, as I was watching, Current Anonymous Users began to climb steeply, reaching about 500 within a few minutes. All this time, Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady, reading (I added a Current Connections counter): Current Anonymous Users: average 30; Get Requests/sec: average 100; Requests/sec for ASP.NET: 5; Current Connections: average 300.

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image; then they flip back to their usual levels.

    So, my questions are: Thinking of the original performance issue - if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served? What other counters should I be looking at if this happens again? What explains the inverse relationship between Get Requests/sec and Current Anonymous Users? And what could explain Current Anonymous Users going from 200 to 500 within a few minutes? Many thanks for any insight into this.
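
    If it happens again, one hedged way to capture more context in a single shot is to log a broader counter set to CSV with typeperf (Windows command prompt) and correlate afterwards; rising "Current Anonymous Users" with flat Requests/sec usually just means connections are piling up because responses are slow, so the queue and connection counters are the interesting ones. The counters below are standard IIS 6 / ASP.NET ones; the w3wp instance name is an assumption for a single-worker-process box.

        rem sample every 5 seconds for an hour, run on the web server itself
        typeperf -si 5 -sc 720 -o slowdown.csv ^
          "\ASP.NET\Requests Queued" ^
          "\ASP.NET\Request Wait Time" ^
          "\ASP.NET Applications(__Total__)\Requests/Sec" ^
          "\Web Service(_Total)\Current Connections" ^
          "\Process(w3wp)\% Processor Time" ^
          "\Process(w3wp)\Thread Count" ^
          "\TCPv4\Connections Established"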

    Read the article

  • MySQL performance over a (local) network much slower than I would expect

    - by user15241
    MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site with many modules installed. The webserver (Nginx) and database server (MySQL) are hosted on separate machines, connected by a 100 Mbps LAN connection (hosted by Rackspace). I have the exact same site running on my laptop for development; obviously, on my laptop, the webserver and database server are on the same box.

    Here are my database query times:

        Production:
        Executed 291 queries in 320.33 milliseconds. (homepage)
        Executed 517 queries in 999.81 milliseconds. (content page)

        Development:
        Executed 316 queries in 46.28 milliseconds. (homepage)
        Executed 586 queries in 79.09 milliseconds. (content page)

    As can clearly be seen from these results, querying the MySQL database takes much less time on my laptop, where the MySQL server is running on the same machine as the web server. Why is this?!

    One factor must be network latency. On average, a round trip from the webserver to the database server takes 0.16 ms (shown by ping), and that must be added to every single MySQL query. Taking the content page example above, where 517 queries are executed, network latency alone adds 82 ms to the total query time. However, that doesn't account for the difference I am seeing (79 ms on my laptop vs. 999 ms on the production boxes). What other factors should I be looking at? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved.

    I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop, apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments, and in the production environment I have even tried increasing max_heap_table_size and tmp_table_size to 1GB, with no change (I think this is because I have some BLOB and TEXT columns).
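
    One hedged way to separate per-query network cost from everything else is to time a batch of trivial queries over a single connection and compare it with the raw ping round trip; if the trivial-query round trip is much worse than ping, the suspects are things like reverse DNS on connect (skip-name-resolve), TCP settings, or the hosting network rather than MySQL tuning. Host, user, and database names below are placeholders.

        # raw network round trip between the two boxes
        ping -c 20 db1.internal

        # 500 trivial queries over ONE connection, so connect overhead is paid once;
        # divide the elapsed time by 500 for the effective per-query round trip
        # (assumes credentials are available via ~/.my.cnf)
        time ( yes 'SELECT 1;' | head -n 500 | mysql -h db1.internal -u drupal drupal_db > /dev/null )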

    Read the article

  • Standardizing a Release/Tools group on a specific language

    - by grahzny
    I'm part of a six-member build and release team for an embedded software company. We also support a lot of developer tools, such as Atlassian's Fisheye, Jira, etc., Perforce, Bugzilla, AnthillPro, and a couple of homebrew tools (like my Django release notes generator).

    Most of the time, our team just writes little plugins for larger apps (ex: customize workflows in Anthill), long-term utility scripts (package up a release for QA), or things like Perforce triggers (don't let people check into a specific branch unless their change description includes a bug number; authenticate against Active Directory instead of Perforce's internal passwords). That's about the scale of our problems, although we sometimes tackle something slightly more sizable.

    My boss, who is reasonably technical, has asked us to standardize on one or two languages so we can more easily substitute for each other. He's advocating bash scripts and Perl, due to their universality and simplicity. I can see his point--we mostly do "glue", so why not use "glue" languages rather than saddle ourselves with something designed for much larger projects? Since some of the tools we work with are Java-based, we do need to use something that speaks JVM sometimes. (The path of least resistance for these projects is BeanShell and Groovy.)

    I feel a tremendous itch toward language advocacy, but I'm trying to avoid saying "We should use Python 'cause I like it and Perl is gross." Instead, I'm trying to come up with a good approach to defining our problem set: what problems do we solve with scripts? Would we benefit from a library of common functions built by our team, or are most of our projects more isolated? What is it reasonable to expect my co-workers to learn? What languages give us the most ease of development and ease of modification?

    Can you folks suggest some useful ways to approach this problem, both for my own thinking process and to help me facilitate some brainstorming among my coworkers?
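
    For a sense of the scale being discussed: the Perforce trigger mentioned above ("don't let people check into a specific branch unless their change description includes a bug number") fits in a few lines in any of the candidate languages. A rough, hypothetical shell sketch follows, assuming a change-submit trigger and a BUG-NNNN convention; the server address, trigger user, and bug pattern are made up.

        #!/bin/bash
        # hypothetical change-submit trigger: the Perforce server passes %change% as $1
        change="$1"
        desc=$(p4 -p perforce:1666 -u triggerbot change -o "$change" | sed -n '/^Description:/,$p')
        if ! printf '%s' "$desc" | grep -Eq 'BUG-[0-9]+'; then
            echo "Change $change rejected: description must reference a bug number (BUG-NNNN)." >&2
            exit 1   # non-zero exit makes the server refuse the submit
        fi
        exit 0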

    Read the article

  • Can the Visual Studio (2010) Command Window handle "external tools" with project/solution-relative paths?

    - by ee
    I have been playing with the Command Window in Visual Studio (View - Other Windows - Command Window). It is great for several mouse-free scenarios. (The autocompleting file "Open" command rocks in a non-trivial solution.) That success got me thinking and experimenting:

    Possibility 1.1: You can use the Alias command to create custom commands.
    Possibility 1.2: You can use the Shell command to run arbitrary executables and specify parameters (and pipe the result to the output or command windows).
    Possibility 2: A previously set-up external tool definition (with project-relative path variables) could be run from the command window.

    What I am stuck on is:

    There doesn't appear to be a way to send parameters to an aliased command (and thus to the underlying Shell call).
    There doesn't appear to be a way to use project/solution-relative paths ($SolutionDir/$ProjectDir) on a Shell call. Using absolute paths in Shell works, but it is fragile and high-maintenance (one alias for each needed use case); typically you want the command to run against a file relative to your project/solution.
    It seems you can't run the traditional external tools (Tools - External Tools...) in the command window.

    Ultimately I want the external tool functionality in the command window in some way. Can anyone see a way to do this? Or am I barking up the wrong tree?

    So my questions: Can an "external tool" of some sort (using relative project/solution path parameters) be used in the Command Window? If yes, how? If no, what might be a suitable alternative?

    Read the article

  • Mobile Chrome Office Hours: Tools for Mobile Web Development

    Ask and vote for questions at: goo.gl. Are you building for the mobile web? Are you looking for easier and better tools to help you create great experiences? Join Boris Smus and Pete LePage as they show you some of the many tools available to mobile web developers. We'll take a look at Chrome's remote debugging features and some of the emulation tools available to you within Chrome, and take a deep dive into some of the advanced use cases of these tools to help you build for the mobile web. From: GoogleDevelopers Views: 1432 60 ratings Time: 42:16 More in Science & Technology

    Read the article

  • Google I/O 2010 - Google Chrome's developer tools

    Chrome 101 - Pavel Feldman, Anders Sandholm. In this session we'll give an overview of the Developer Tools for Google Chrome, which are part of the standard Chrome distribution. Chrome Developer Tools allows inspecting, debugging and tuning web applications, and much more. In addition to this overview, we would like to share some implementation details of the Developer Tools features and call for your contributions. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers Views: 6 0 ratings Time: 43:30 More in Science & Technology

    Read the article

  • Benefits and features of different requirements-management systems and tools available?

    - by DevDevDave
    I am looking for a good comparison of the professional requirements-management tools that are available. I am especially interested in the features offered by the different software solutions. In addition to the "obvious" features, I am looking for a professional requirements-management system that supports:

    multi-lingual use
    customizable generation of documentation & history (graphs)
    search features (e.g. full text over comments), ordering, priorities
    version history
    bi-directional traceability of changes, artefacts, requirements, changes in requirements, etc.

    Any kind of integration of V-Model XT would be a really-nice-to-have feature. Besides that, I'd like to hear any personally motivated recommendations and/or experiences with different requirements-management systems. Any input is highly appreciated.

    Content consulted: a similar question; reqm tool with v-model; ToolsJournal.com; a nice, but too old, paper (pdf)

    Read the article

  • Google I/O 2012 - Chrome Developer Tools Evolution

    Sam Dutton, Pavel Feldman. Web app development moves fast, and Chrome Developer Tools is still keeping you one step ahead. If you know your way around the Dev Tools and would like to take your skills to a higher level, this session will kick your productivity into overdrive. Since last year's installment, we've added a whole slew of features that empower developers to make rich web apps, so in this demo-rich session we'll explain how to use those tools to develop and debug on mobile and desktop. We'll take you jank hunting with the new timeline, delve into minified JavaScript via Source Maps, debug Web Workers, and much more. Join us and learn what Chrome Developer Tools can do for you. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers Views: 1722 36 ratings Time: 59:41 More in Science & Technology

    Read the article

  • Error while 'Processing triggers for initramfs-tools' when installing KDE 4.11 through the terminal

    - by saptarshi nag
    The full output is:

        Setting up initramfs-tools (0.103ubuntu0.2.1) ...
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.5.0-42-generic
        cp: cannot stat ‘/module-files.d/libpango1.0-0.modules’: No such file or directory
        cp: cannot stat ‘/modules/pango-basic-fc.so’: No such file or directory
        E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1.
        update-initramfs: failed for /boot/initrd.img-3.5.0-42-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)
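
    The failing step is the plymouth initramfs hook, which cannot find the Pango module files it wants to copy. One hedged recovery path, assuming those files are simply missing from a broken libpango1.0-0 / plymouth installation rather than a deeper problem, is to reinstall those packages and re-run the initramfs generation:

        sudo apt-get install --reinstall libpango1.0-0 plymouth plymouth-label
        sudo update-initramfs -u -k all      # regenerate the initrd the hook choked on
        sudo dpkg --configure -a             # let dpkg finish the interrupted configure step
        sudo apt-get install -f              # pick up anything still half-installed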

    Read the article

  • Authorship-verified website not included in "Author Stats" of Google Webmaster Tools?

    - by Yosi Mor
    In Google Webmaster Tools, is it normal for a website for which the Structured Data Testing Tool shows that "Authorship is working for this webpage" -- to not be listed in the "Author Stats" section (under "Labs")? I already understand that successful verification using the Structured Data Testing Tool does not guarantee that Google will actually display authorship in the SERPs, and that Google decides this at its own discretion. However, shouldn't such successful verification at least guarantee that the website is included in the "Author Stats" section (which purportedly covers "pages for which you are the verified author")? I would have assumed that, if Google is not yet displaying authorship for that site, it would show both its Impressions and Clicks as being "<10". Are my assumptions incorrect?

    Read the article

  • How can I set parameters in Google webmaster tools so that my dynamic content is indexed?

    - by Werewolf
    I have read the questions about URL parameters in Google Webmaster Tools on this site and in the Google Webmaster Help Center, but I still have a problem. My site searches the database and shows some information. These two URLs display some data: http://mydomain.com/index.aspx?category=business http://mydomain.com/index.aspx?category=graphic&City=Paris In the URL parameters section I can only define the parameter category, so how can Google detect the proper values (business, graphics, real estate, ...)? Not every word is a valid value for the search. Also, if my page name is default.aspx or anything else, where should I define it? And if I use URL rewriting, like http://mydomain.com/search/category/business, do my settings need to change?

    Read the article

  • How to install VMware Tools?

    - by Tom
    I installed Ubuntu in VMware, and now I need to install VMware Tools, but I get this error:

        Searching for a valid kernel header path...
        The path "" is not valid.
        Would you like to change it? [yes]

    In CentOS, I would run the following commands to resolve this issue:

        yum install gcc-c++
        yum install kernel-devel
        yum install kernel-headers
        yum -y update kernel

    But I don't know what to do in Ubuntu. Please help.

    Update: I have tried the following commands, but nothing changed; I still get the same "Searching for a valid kernel header path... The path "" is not valid." error:

        sudo apt-get update
        sudo-get install build-essential linux-header-$(uname -r)
        sudo ./vmware-uninstall-tools.pl
        sudo ./vmware-config-tools.pl
        sudo ./vmware-install.pl

    Issue changed: I ran sudo ./vmware-uninstall-tools.pl, deleted the folder /etc/vmware-tools, and then ran sudo ./vmware-install.pl. Now I can successfully install VMware Tools. After a restart I can see the folder /mnt/hgfs, but I can't see my shared folder.
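
    Two hedged notes on the commands quoted above: the header package is named linux-headers-$(uname -r) (plural), and "sudo-get" is missing "apt", so that line probably never installed anything - which would explain the empty kernel header path. On recent Ubuntu releases the packaged tools are usually the easier route; something along these lines (package names vary by release - older releases call the desktop part open-vm-toolbox):

        # packaged tools: no kernel-header dance needed for most features
        sudo apt-get update
        sudo apt-get install open-vm-tools open-vm-tools-desktop

        # or, if you stick with the tarball installer, give it real headers first
        sudo apt-get install build-essential linux-headers-$(uname -r)
        sudo ./vmware-install.pl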

    Read the article

  • Best in-depth analytics or stats tools? (preferably server-side)

    - by Litso
    Hey all, I know there have been questions about this before, but mine is a little more specific. I work for a high-traffic website and we want to start tracking our visitors better. Unfortunately, Google Analytics is not an option at the moment, so I'm looking for some alternatives, preferably server-side (but not necessarily). We're currently running Urchin, but what I miss most there is the way you can set up conversions in Analytics and then track (for example) which keywords or which landing pages convert better. Also, A/B testing is something I really miss. Which analytics tools are comparable to Google Analytics in terms of advanced segmentation, navigation summaries, A/B testing, etc.?

    Read the article

  • Adding users to multiple/all sites in Google Webmaster Tools?

    - by Christian
    I didn't find an answer to this, but maybe I didn't use the right terms for my search. So I'm sorry if this is a duplicate. Anyway, my situation is this: the company I work at manages a lot of sites (100+), and we've recently put them all into Google Webmaster Tools under my Google account, which was tedious enough. Now two coworkers are supposed to be added as users for each site, so they can see the data and manage stuff there as well. But I can only find an option to add users for a single site, not for all sites that are currently associated with my account. Do I really have to go through more than a hundred sites one by one and add the two users to each of them, or is there some way to add both users to all/multiple sites at once?

    Read the article

  • Performance Tuning a High-Load Apache Server

    - by futureal
    I am looking to understand some server performance problems I am seeing with a (for us) heavily loaded web server. The environment is as follows:

        Debian Lenny (all stable packages + patched to security updates)
        Apache 2.2.9
        PHP 5.2.6
        Amazon EC2 large instance

    The behavior we're seeing is that the web typically feels responsive, but with a slight delay to begin handling a request -- sometimes a fraction of a second, sometimes 2-3 seconds in our peak usage times. The actual load on the server is being reported as very high -- often 10.xx or 20.xx as reported by top. Further, running other things on the server during these times (even vi) is very slow, so the load is definitely up there. Oddly enough, Apache remains very responsive other than that initial delay.

    We have Apache configured as follows, using prefork:

        StartServers 5
        MinSpareServers 5
        MaxSpareServers 10
        MaxClients 150
        MaxRequestsPerChild 0

    And KeepAlive as:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5

    Looking at the server-status page, even at these times of heavy load we are rarely hitting the client cap, usually serving between 80-100 requests, many of those in the keepalive state. That tells me to rule out the initial request slowness as "waiting for a handler", but I may be wrong. Amazon's CloudWatch monitoring tells me that even when our OS is reporting a load of 15, our instance CPU utilization is between 75-80%.

    Example output from top:

        top - 15:47:06 up 31 days, 1:38, 8 users, load average: 11.46, 7.10, 6.56
        Tasks: 221 total, 28 running, 193 sleeping, 0 stopped, 0 zombie
        Cpu(s): 66.9%us, 22.1%sy, 0.0%ni, 2.6%id, 3.1%wa, 0.0%hi, 0.7%si, 4.5%st
        Mem: 7871900k total, 7850624k used, 21276k free, 68728k buffers
        Swap: 0k total, 0k used, 0k free, 3750664k cached

    The majority of the processes look like:

        24720 www-data 15 0 202m 26m 4412 S 9 0.3 0:02.97 apache2
        24530 www-data 15 0 212m 35m 4544 S 7 0.5 0:03.05 apache2
        24846 www-data 15 0 209m 33m 4420 S 7 0.4 0:01.03 apache2
        24083 www-data 15 0 211m 35m 4484 S 7 0.5 0:07.14 apache2
        24615 www-data 15 0 212m 35m 4404 S 7 0.5 0:02.89 apache2

    Example output from vmstat at the same time as the above:

        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff   cache   si   so   bi    bo    in   cs  us sy id wa
         8  0      0 215084  68908 3774864   0    0   154   228     5    7  32 12 42  9
         6 21      0 198948  68936 3775740   0    0   676  2363  4022 1047  56 16  9 15
        23  0      0 169460  68936 3776356   0    0   432  1372  3762  835  76 21  0  0
        23  1      0 140412  68936 3776648   0    0   280     0  3157  827  70 25  0  0
        20  1      0 115892  68936 3776792   0    0   188     8  2802  532  68 24  0  0
         6  1      0 133368  68936 3777780   0    0   752    71  3501  878  67 29  0  1
         0  1      0 146656  68944 3778064   0    0   308  2052  3312  850  38 17 19 24
         2  0      0 202104  68952 3778140   0    0    28    90  2617  700  44 13 33  5
         9  0      0 188960  68956 3778200   0    0     8     0  2226  475  59 17  6  2
         3  0      0 166364  68956 3778252   0    0     0    21  2288  386  65 19  1  0

    And finally, output from Apache's server-status:

        Server uptime: 31 days 2 hours 18 minutes 31 seconds
        Total accesses: 60102946 - Total Traffic: 974.5 GB
        CPU Usage: u209.62 s75.19 cu0 cs0 - .0106% CPU load
        22.4 requests/sec - 380.3 kB/second - 17.0 kB/request
        107 requests currently being processed, 6 idle workers

        C.KKKW..KWWKKWKW.KKKCKK..KKK.KKKK.KK._WK.K.K.KKKKK.K.R.KK..C.C.K
        K.C.K..WK_K..KKW_CK.WK..W.KKKWKCKCKW.W_KKKKK.KKWKKKW._KKK.CKK...
        KK_KWKKKWKCKCWKK.KKKCK..........................................
        ................................................................

    From my limited experience I draw the following conclusions/questions:

    We may be allowing far too many KeepAlive requests.
    I do see some time spent waiting for I/O in the vmstat output, although not consistently and not a lot (I think?), so I am not sure whether this is a big concern; I am less experienced with vmstat.
    Also in vmstat, I see in some iterations a number of processes waiting to be served, which is what I am attributing the initial page-load delay on our web server to - possibly erroneously.
    We serve a mixture of static content (75% or higher) and script content, and the script content is often fairly processor-intensive, so finding the right balance between the two is important; long term we want to move statics elsewhere to optimize both servers, but our software is not ready for that today.

    I am happy to provide additional information if anybody has any ideas. The other note is that this is a high-availability production installation, so I am wary of making tweak after tweak, which is why I haven't played with things like the KeepAlive value myself yet.
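
    A hedged first check for the "slight delay before handling" symptom is whether prefork is having to fork new children during the busy spells (each fork is exactly a short, one-off delay) and whether MaxClients 150 even fits in memory at roughly 30 MB resident per child; something along these lines, run during a slow period, gives both numbers. The 4.5%st (steal) figure in the top header is also worth watching separately, since on EC2 that is CPU time the hypervisor gave to other tenants.

        # average resident size of an apache2 child and how many are alive right now
        ps -o rss= -C apache2 | awk '{sum+=$1; n++} END {printf "children=%d  avg_rss=%.1f MB  total=%.0f MB\n", n, sum/n/1024, sum/1024}'

        # watch whether Apache is spawning new children while the delays are happening
        watch -n 5 'ps -C apache2 --no-headers | wc -l'

    As a rough sanity check (not a recommendation), 150 children at ~30 MB each is about 4.5 GB, which only barely fits alongside the ~3.7 GB page cache in 7.8 GB of RAM.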

    Read the article

  • What is the recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends:

    "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger."

    We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: What is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are under 860 bytes.

    Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them."

    So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this down to the 150-byte limit - just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
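
    A quick, hedged way to see the small-object overhead both parties are describing: gzip adds a fixed header and trailer (roughly 20 bytes) plus deflate framing, so a tiny response can come out larger while a 1 KB one shrinks nicely. The payloads below are made-up stand-ins for a small AJAX response.

        printf '{"ok":true,"id":123}' > tiny.json                 # ~20-byte body
        yes '{"ok":true,"id":123}' | head -c 1024 > kb.json       # ~1 KB of similar text
        for f in tiny.json kb.json; do
            printf '%-10s raw=%5d bytes  gzipped=%5d bytes\n' "$f" "$(wc -c < "$f")" "$(gzip -c "$f" | wc -c)"
        done

    The 150-byte versus 860-byte disagreement is mostly about where that fixed overhead stops mattering relative to a single TCP segment (~1460 bytes of payload), which is the packet argument Akamai is making.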

    Read the article

  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments I have with other people on the project is about how we call the Twitter API.

    Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets was the heaviest part, since we don't store tweets in our database, so viewing tweets means calling the API every time. One of the people on the project suggested that we fetch the tweets through Javascript instead, to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now.

    When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our Javascript requests to the API. As said here: "We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values."

    If we do exactly what Twitter suggests, the only difference between this and doing everything server-side is that our server won't have to contact the Twitter API every time the user wants to view tweets. Here's how I would picture what's happening every time the user makes a request:

    If we do it through Javascript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth the trouble. Doing it through Ruby on Rails would be very easy, since the Twitter gem does most of the grunt work already, so I'm really encouraging the other people on the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to Javascript?
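
    For a sense of scale, the "compute the OAuth signatures server side" step Twitter describes boils down to one HMAC-SHA1 over a percent-encoded base string, whichever language performs it. A rough shell sketch of just that step is below; the base string, endpoint, and secrets are placeholders, and assembling/percent-encoding the base string from the sorted request parameters is the fiddly part that is omitted here.

        consumer_secret='xxxxxxxx'      # placeholders; real values never leave the server
        token_secret='yyyyyyyy'
        # base string = METHOD & percent-encoded URL & percent-encoded, sorted params (assembly omitted)
        base_string='GET&https%3A%2F%2Fapi.twitter.com%2F1.1%2Fsearch%2Ftweets.json&count%3D20%26q%3Dfoo...'

        signing_key="${consumer_secret}&${token_secret}"
        oauth_signature=$(printf '%s' "$base_string" \
            | openssl dgst -sha1 -hmac "$signing_key" -binary \
            | base64)
        echo "$oauth_signature"   # handed to the browser along with nonce/timestamp for the client-side call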

    Read the article

  • How to squeeze the maximum performance out of Unity and GNOME 3?

    - by melvincv
    I see that I do not get good performance with the new Unity desktop, though I should say that Unity has improved a lot since the last edition, Ubuntu 11.10. How do I squeeze the maximum performance out of: 1. Unity, 2. GNOME 3?

    My system specs:

        Processors: Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz
        Memory: Total Memory : 2049996 kB
        PCI Devices:
        Host bridge : Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10)
        PCI bridge : Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10) (prog-if 00 [Normal decode])
        VGA compatible controller : Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) (prog-if 00 [VGA controller])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI])
        PCI bridge : Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode])
        ISA bridge : Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        IDE interface : Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01) (prog-if 8a [Master SecP PriP])
        IDE interface : Intel Corporation N10/ICH7 Family SATA Controller [IDE mode] (rev 01) (prog-if 8f [Master SecP SecO PriP PriO])
        SMBus : Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
        Ethernet controller : Intel Corporation PRO/100 VE Network Connection (rev 01)
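
    On a G31-era integrated GPU, a hedged first step is to check whether Unity is actually getting hardware acceleration and then to strip the expensive compositing effects; the path and package names below are from the Ubuntu 12.04 era and may differ on other releases.

        /usr/lib/nux/unity_support_test -p                   # reports the OpenGL renderer and whether Unity 3D is supported
        glxinfo | grep -iE 'renderer|direct rendering'       # needs the mesa-utils package

        # CCSM lets you disable costly Compiz plugins (blur, animations) for Unity;
        # gnome-session-fallback provides a lighter non-Shell session to compare GNOME 3 against
        sudo apt-get install mesa-utils compizconfig-settings-manager gnome-session-fallback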

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about. After analyzing the differences in time between a Dictionary with locking and the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster for read-heavy multi-threaded operations. Then I made the classic blunder of thinking that because the original Dictionary with locking was faster for write-heavy uses, it was the best choice for those types of tasks. In short, I fell into the premature-optimization anti-pattern.

    Basically, the premature-optimization anti-pattern is when a developer codes very early for a perceived (whether rightly or wrongly) performance gain and sacrifices good design and maintainability in the process. At best, the performance gains are usually negligible; at worst, they can either negatively impact performance or degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity.

    Keep in mind the distinction above. I'm not talking about valid performance decisions. There are decisions one should make when designing and writing an application that are valid performance decisions. Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performant algorithms (linear search vs. binary search). But these in my mind are macro optimizations. The error is not in deciding to use a better data structure or algorithm; the anti-pattern, as stated above, is when you attempt to over-optimize early on in such a way that it sacrifices maintainability.

    In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking. This would have been a mistake, as I would be trading away maintainability (ConcurrentDictionary requires no locking, which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified, and you don't risk a developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.

    I think in my case, and it may be true for others as well, a large part of it was due to the time when I was trained as a developer. I began college in the 90s, when C and C++ were king and hardware speed and memory were still relatively priceless commodities not to be squandered. In those days, using a long instead of a short could waste precious resources, and as such, we were taught to minimize space and favor performance. This is why, in many cases, such early code-bases were very hard to maintain. I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact, just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead. Now, back then that may have been a valid concern, but with today's modern hardware, even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway), the results of removing method calls to speed up performance are negligible for the great majority of applications. Obviously, there are those applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.

    But I would strongly advise against such optimization because of its cost. Many folks performing an optimization think it's always a win-win: they're simply adding speed to the application, so what could possibly be wrong with that? What they don't realize is the cost of their choice. For every piece of straightforward code that you obfuscate with performance enhancements, you risk introducing bugs into the long-term technical debt of the application. It will become so fragile over time that maintenance becomes a nightmare. I've seen such applications in places I have worked. There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for the application to try to squeeze out every ounce of performance. Unfortunately, the application's stability often suffers as a result, and it is very difficult for anyone other than the original designer to maintain.

    I've even seen this recently, when I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation). To me this was almost a joke. Twice as slow sounds bad, but it is almost never as bad as you think -- especially if you're gaining safety. The only time "twice as slow" really matters is when "once" was too slow to begin with. Think about it: 2 minutes is a slow response time because 1 minute is slow. But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial! The only way you'd ever really notice it would be in iterating a collection just for the sake of iterating (i.e. no other operations). To my mind, the added safety makes the extra time worth it.

    Always favor safety and maintainability when you can. I know it can be a hard habit to break, especially if you started your career early or in a language such as C where developers are very performance-conscious. But in reality, these types of micro-optimizations only end up hurting you in the long run.

    Remember the two laws of optimization. I'm not sure where I first heard these, but they are so true:

    For beginners: Do not optimize.
    For experts: Do not optimize yet.

    This is so true. If you're a beginner, resist the urge to optimize at all costs. And if you are an expert, delay that decision. As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient. Chances are it will be network, database, or disk hits that are your slowdown, not your code. As they say, 98% of your code's bottleneck is in 2% of your code, so premature optimization may add maintenance and safety debt that won't have any measurable impact. Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, go back and optimize further.

    Read the article

  • JQuery Tools Overlay for modal dialog broken under IE8

    - by Gary McGill
    I've been developing a website that has several modal dialog boxes, using jQuery Tools Overlay for the dialogs. However, I've just discovered that it doesn't seem to work properly in IE8. In Chrome (and, I presume, other browsers) the dialog is highlighted by darkening the rest of the page "below" it, but in IE8 the page "below" is obliterated - all you get is the dialog on a black background. This appears to have nothing to do with the way I've configured it - the same problem is evident on the jQuery Tools website itself. If you click the link above and then click one of the two buttons under the heading "For User Interactions", you'll see what I mean. What's the deal? Does it simply not support IE8? If so, (a) grrrr... and (b) what else should I use?

    Read the article

  • Agile sysadmin and devops - How to accomplish?

    - by Marco Ramos
    Nowadays, agile systems administration and devops are among the most trending topics in systems administration and operations. Both of these concepts are mainly focused on bridging the gap between operations/sysadmins and the projects (developers, business, etc.). Even if you have never heard of the devops concept, I'm sure this topic is a concern of yours too. So, what tools and techniques do you use to accomplish devops in your companies? I'm particularly interested in topics like change management, continuous integration, and automation, but not only those. Please share your thoughts - I'm looking forward to reading your answers/opinions :)

    Read the article

  • How do I objectively measure an application's load on a server

    - by Joe
    All, I'm not even sure where to begin looking for resources to answer my question, and I realize that speculation about this kind of thing is highly subjective. I need help determining what class of server I should purchase to host an MS Silverlight application with an MSSQL back-end on a Windows Server 2008 platform. It's an interactive application, so I can't simply generate a list of URLs to test against and run it with 1000 simultaneous users. What tools are out there to help me determine what kind of load the application will put on a server at varying levels of concurrent users? Would you all suggest separating the SQL server from the web server, to better differentiate the generated load on the different parts of the stack?
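
    One hedged, tool-agnostic approach is to script a handful of realistic user sessions (the service calls behind the Silverlight UI can usually be replayed even when page URLs can't), run them at increasing concurrency, and log server counters during each run so you can extrapolate per-user cost. The counter set below (Windows command prompt, typeperf) is a generic IIS/SQL starting point, not specific to your application; if the web and SQL roles end up on separate boxes, log the SQL counter on the database server instead.

        rem log a baseline every 5 seconds while N scripted users exercise the app (Ctrl+C to stop)
        typeperf -si 5 -o load_10users.csv ^
          "\Processor(_Total)\% Processor Time" ^
          "\Memory\Available MBytes" ^
          "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
          "\Web Service(_Total)\Current Connections" ^
          "\SQLServer:SQL Statistics\Batch Requests/sec"

    Plotting those counters at, say, 10, 25, and 50 concurrent users usually makes it obvious which resource saturates first, which also informs the web/SQL separation question.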

    Read the article
