Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 23/572

  • Find out which task is generating a lot of context switches on linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second:

        # vmstat 3
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff   cache  si  so   bi   bo   in    cs us sy id wa
         2  0   7292 249472  82340 2291972   0   0    0    0    0     0  7 13 79  0
         0  0   7292 251808  82344 2291968   0   0    0  184   24 20090  1  1 99  0
         0  0   7292 251876  82344 2291968   0   0    0   83   17 20157  1  0 99  0
         0  0   7292 251876  82344 2291968   0   0    0   73   12 20116  1  0 99  0

    ...but uptime shows a small load (load average: 0.01, 0.02, 0.01) and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process/thread? I tried to analyze the pidstat output:

        # pidstat -w 10 1
        12:39:13      PID   cswch/s nvcswch/s  Command
        12:39:23        1      0.20      0.00  init
        12:39:23        4      0.20      0.00  ksoftirqd/0
        12:39:23        7      1.60      0.00  events/0
        12:39:23        8      1.50      0.00  events/1
        12:39:23       89      0.50      0.00  kblockd/0
        12:39:23       90      0.30      0.00  kblockd/1
        12:39:23      995      0.40      0.00  kirqd
        12:39:23      997      0.60      0.00  kjournald
        12:39:23     1146      0.20      0.00  svscan
        12:39:23     2162      5.00      0.00  kjournald
        12:39:23     2526      0.20      2.00  postgres
        12:39:23     2530      1.00      0.30  postgres
        12:39:23     2534      5.00      3.20  postgres
        12:39:23     2536      1.40      1.70  postgres
        12:39:23    12061     10.59      0.90  postgres
        12:39:23    14442      1.50      2.20  postgres
        12:39:23    15416      0.20      0.00  monitor
        12:39:23    17289      0.10      0.00  syslogd
        12:39:23    21776      0.40      0.30  postgres
        12:39:23    23638      0.10      0.00  screen
        12:39:23    25153      1.00      0.00  sshd
        12:39:23    25185     86.61      0.00  daemon1
        12:39:23    25190     12.19     35.86  postgres
        12:39:23    25295      2.00      0.00  screen
        12:39:23    25743      9.99      0.00  daemon2
        12:39:23    25747      1.10      3.00  postgres
        12:39:23    26968      5.09      0.80  postgres
        12:39:23    26969      5.00      0.00  postgres
        12:39:23    26970      1.10      0.20  postgres
        12:39:23    26971     17.98      1.80  postgres
        12:39:23    27607      0.90      0.40  postgres
        12:39:23    29338      4.30      0.00  screen
        12:39:23    31247      4.10     23.58  postgres
        12:39:23    31249     82.92     34.77  postgres
        12:39:23    31484      0.20      0.00  pdflush
        12:39:23    32097      0.10      0.00  pidstat

    It looks like some of the PostgreSQL tasks are doing around 10 context switches per second, but that doesn't come anywhere close to summing up to 20k. Any idea how to dig a little deeper for an answer?

    Read the article

  • Changing the memory allocator to jemalloc on CentOS 6

    - by Brian Lovett
    After reading this blog post about the impact of memory allocators like jemalloc on highly threaded applications, I wanted to test things on a larger scale on some of our cluster servers. We run Sphinx, and Apache using threads, on 24-core machines. Installing jemalloc was simple enough; we are running CentOS 6, so yum install jemalloc jemalloc-devel did the trick. My question is: how do we switch everything on the system over to using jemalloc instead of the default malloc built into CentOS? Research pointed me at this as a potential option:

        LD_PRELOAD=$LD_PRELOAD:/usr/lib64/libjemalloc.so.1

    Would this be sufficient to get everything using jemalloc?

    Read the article

  • Benchmark for website speed optimization

    - by gowri
    I am working on website speed optimization. I mostly use three tools for analyzing the speed work:

        Google PageSpeed tool
        YSlow Firefox extension
        Web Page Performance Test

    I measure performance with the tools above and benchmark the results before and after.

    Before optimization:
        Google PageSpeed Insights score: 53/100
        Web Page Performance Test: 55/100 (First View: 10.710s, Repeat View: 6.387s)
        Yahoo overall performance score: 68

    Stage 1, after optimization:
        Google PageSpeed Insights score: 88/100
        Web Page Performance Test: 88/100 (First View: 6.733s, Repeat View: 1.908s)
        Yahoo overall performance score: 80

    My questions are: Am I doing this the correct way? What is the best way to benchmark speed optimization work? Is there any standard? Is there a better tool for analyzing speed?

    Read the article

  • ATG Live Webcast March 21 Reminder: Network, WAN, and PC Performance Tuning (Performance Series Part 3 of 3)

    - by BillSawyer
    A quick reminder about tomorrow's webcast: Andy Tremayne, Senior Architect, Applications Performance, and co-author of Oracle Applications Performance Tuning Handbook from Oracle Press, and Uday Moogala, Senior Principal Engineer, Applications Performance, will discuss network performance for E-Business Suite. Andy and Uday will cover tuning the client and tuning the network. They will share real-life examples of network performance, and show you tools and techniques that you can use to estimate or simulate performance on your own network.

    The agenda for the Performance Tuning - Part 3 of 3 webcast includes the following topics:

        Tuning the Client
        Tuning the Network

    Date: Thursday, March 21, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Andy Tremayne, Senior Architect, Applications Performance; Uday Moogala, Senior Principal Engineer, Applications Performance
    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
        Domestic Participant Dial-In Number: 877-697-8128
        International Participant Dial-In Number: 706-634-9568
        Additional International Dial-In Numbers Link
        Dial-In Passcode: 99341

    To see the presentation, the Direct Access Web Conference details are:
        Website URL: https://ouweb.webex.com
        Meeting Number: 591264961

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here.

    Read the article

  • How to select a server that supports Windows scheduled file IO

    - by Kristof Verbiest
    Background: I am developing an application that needs to read data from disk with a fairly consistent throughput. It is important that this throughput is not influenced by other actions that happen on the disk (e.g. by other processes). For this purpose, I was hoping to use the 'Scheduled File I/O' feature in Windows (through the GetFileBandwidthReservation and SetFileBandwidthReservation functions). However, this StackOverflow question has taught me that this feature is only available if the device driver supports it. Currently I have no computer at my disposal that seems to support this feature (I have an HP ProLiant server and a Dell Precision workstation). Question: if I were to order a new server, how can I know beforehand whether this feature will be supported by the device driver? How 'upscale' does the server have to be? Has anybody used this feature with success and would care to share their experiences?

    Read the article

  • Which will give more free RAM to Linux?

    - by Linda Thomas
    I'm trying to avoid some issues, so I've been trying to learn the vm.* kernel tuning settings, but I'm still a little confused even after googling. The lower dirty_background_ratio is, the sooner the flushes happen? And the lower dirty_ratio is, the less dirty RAM is kept, right? So which of these will leave more free RAM:

        vm.dirty_ratio = 20
        vm.dirty_background_ratio = 1

    or

        vm.dirty_ratio = 60
        vm.dirty_background_ratio = 20

    or

        vm.dirty_ratio = 20
        vm.dirty_background_ratio = 10

    or

        vm.dirty_ratio = 20
        vm.dirty_background_ratio = 5

    Read the article

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss, but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics, such as:

        number of databases backed up
        size of db backup file
        size of db backup file compressed
        time to make backup
        time to zip file

    I want to be able to: A) get notifications if the jobs are not run according to schedule; B) set thresholds on the statistics which would trigger notifications; and C) trend and graph the statistics. I am planning on sending this information to the monitoring application through an HTTP POST (a sketch of what that might look like is below), or the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.
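
    For example, a first rough sketch of that POST in Java (the endpoint URL and JSON field names are placeholders I made up, not any particular monitoring tool's API):

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class BackupStatsReporter {
            public static void main(String[] args) throws Exception {
                // Statistics collected by the backup wrapper (placeholder values).
                String json = "{\"job\":\"mysql-nightly\",\"databasesBackedUp\":12,"
                        + "\"backupSizeBytes\":734003200,\"compressedSizeBytes\":213909504,"
                        + "\"backupSeconds\":418,\"zipSeconds\":97}";

                // Placeholder endpoint on whichever monitoring server we end up with.
                URL url = new URL("http://monitoring.example.com/api/backup-stats");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);

                try (OutputStream out = conn.getOutputStream()) {
                    out.write(json.getBytes(StandardCharsets.UTF_8));
                }

                // A non-2xx response would itself be something to alert on.
                System.out.println("Monitoring server responded: " + conn.getResponseCode());
                conn.disconnect();
            }
        }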

    Read the article

  • How to check for bottlenecks in Windows Server 2008 R2

    - by Phil Koury
    I recently switched out a 10-year-old server for a brand new server in a small office and upgraded from Windows Server 2000 to Windows Server 2008 R2. After the switch was complete and some configurations were changed around, we started running into what appear to be some bottlenecks in network speed. Accessing programs on the server is slower (resulting in long loading times, slower report generation, etc.) than it was on the old server hardware. I am wondering what options or tools I have, if there are any at all, to find out exactly where these hang-ups might be coming from.

    Read the article

  • How to implement a game launch counter in LibGDX

    - by Vishal Kumar
    I'm writing a game using LibGDX in which I want to save the number of launches of the game in a text file. So, in the create() of my starter class, I have the following code, but it's not working:

        public class MainStarter extends Game {
            private int count;

            @Override
            public void create() {
                // Set up the application
                AppSettings.setUp();

                if (SettingsManager.isFirstLaunch()) {
                    SettingsManager.createTextFileInLocalStorage("gamedata");
                    SettingsManager.writeLine("gamedata", "Launched:" + count, FileType.LOCAL_FILE);
                } else {
                    SettingsManager.writeLine("gamedata", "Not First launch :" + count++, FileType.LOCAL_FILE);
                }

                // Load assets before setting the screen
                // #####################################
                Assets.loadAll();

                // Set the tests screen
                setScreen(new MainMenuScreen(this, "Main Menu"));
            }
        }

    What is the proper way to do this?
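
    For comparison, here is a rough sketch of what I understand a Preferences-based launch counter would look like in libGDX, instead of a hand-rolled text file; the preferences name "gamedata" and the key "launchCount" are just placeholders I picked, and MainMenuScreen is the same screen class as above:

        import com.badlogic.gdx.Game;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.Preferences;

        public class MainStarter extends Game {
            private int launchCount;

            @Override
            public void create() {
                // Preferences are created on first use, so no explicit "first launch" handling is needed.
                Preferences prefs = Gdx.app.getPreferences("gamedata");

                // Read the previous count (0 on the very first launch), increment, and persist it.
                launchCount = prefs.getInteger("launchCount", 0) + 1;
                prefs.putInteger("launchCount", launchCount);
                prefs.flush(); // without flush() the value may never reach disk

                Gdx.app.log("MainStarter", "Launch number " + launchCount);

                // Same screen setup as in the original create()
                setScreen(new MainMenuScreen(this, "Main Menu"));
            }
        }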

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu with the latest stable nginx; network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering whether this system is in good health for now, e.g.:

        What is the queue length? (I know nginx can handle a lot of concurrent requests, but I mean: before a request is served, how many requests have to wait?)
        What is the average queue time for a given request to be served?

    I want to know because if my nginx is CPU bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:

        Active connections: 4076
        server accepts handled requests
         90664283 90664283 104117012
        Reading: 525 Writing: 81 Waiting: 3470

    Read the article

  • Facebook Share Button and Counter no longer displaying any Count

    - by donaldthe
    Is it just me, or did the Facebook Share button that displays the count of shares and likes just stop working over the past few days? The sharing itself still works; the count of shares no longer displays. The link that is generated looks like this: http://www.facebook.com/sharer.php?u= and the JavaScript file on my page is this: http://static.ak.fbcdn.net/connect.php/js/FB.Share I haven't changed anything, and this has worked for years.

    Read the article

  • Understanding the MySQL Query Cache and when to implement it

    - by Jeff
    On our current MySQL server the query cache is enabled:

        Qcache_hits: 31913
        Qcache_inserts: 50959
        Qcache_lowmem_prunes: 9320
        Qcache_not_cached: 209320
        Qcache_queries_in_cache: 986
        Com_update: 0
        Com_delete: 0

    I do not fully understand the query cache; I am reading about it currently and trying to understand it. Our database holds inventory data, customer data, employee data, sales data and so forth. Any given query is very rarely run more than once; the most likely case of a query being run twice is someone viewing specific sales information twice. But basically everything in our system changes constantly. It is always being updated, deleted, and inserted, and off the top of my head I can't picture users running the same query twice within a week. Do I even need to have the query cache enabled? I am guessing that the inserts mean 51k entries have been added, but only 986 of those are still being stored? Would it be an idea to reset the cache, watch it for a week, and check how many of the cached queries are actually accessed, to see if it is actually returning any benefit? Any help/guidance on this is appreciated, thanks.
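
    To put rough numbers on it, here is a small back-of-the-envelope calculation from the counters above, treating hits + inserts + not-cached as the total number of SELECTs the cache saw (a common heuristic, not an official MySQL formula):

        public class QueryCacheHitRate {
            public static void main(String[] args) {
                // Counters copied from the server status above.
                long hits = 31_913;        // Qcache_hits
                long inserts = 50_959;     // Qcache_inserts
                long notCached = 209_320;  // Qcache_not_cached

                // Share of SELECTs that were actually answered from the cache.
                double hitRate = (double) hits / (hits + inserts + notCached);

                // Prints roughly 10.9%, i.e. almost 9 out of 10 SELECTs get no benefit,
                // which supports the suspicion that the cache may not be worth keeping on.
                System.out.printf("Approximate query cache hit rate: %.1f%%%n", hitRate * 100);
            }
        }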

    Read the article

  • Triggering State Changes with Health Counter

    - by Hairgami_Master
    I'm developing a game where the player changes states as their health decreases. Below 50, it should trigger animation1. Below 30, it should trigger animation2. The problem is, I only want to trigger animation1 once, but my game timer is checking every "frame", so it's triggering animation1 on every cycle below 50. I only want it to trigger once, then not again until health has gone back over 50 and then naturally decreased back below 50. Are there any tried and true strategies for triggering state changes as a counter like this ticks down (without the over-triggering problem)? I thought I could say:

        if (health == 50) animation1.play();

    but sometimes health never equals exactly 50, so it will skip right past that statement.
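
    One strategy I've been looking at is to remember the previous health value and fire only on the downward crossing of each threshold, so the trigger re-arms automatically once health climbs back above it. A minimal sketch in plain Java (the playAnimation methods are stand-ins for whatever animation1.play() / animation2.play() do in the real engine):

        public class HealthStateTrigger {
            private float previousHealth = 100f;

            // Called once per frame/update with the current health value.
            public void update(float health) {
                // Fire only when health crosses the threshold downward, not on every frame below it.
                if (previousHealth >= 50f && health < 50f) {
                    playAnimation1();
                }
                if (previousHealth >= 30f && health < 30f) {
                    playAnimation2();
                }
                // If health later rises back above a threshold, previousHealth >= threshold again,
                // so the next downward crossing fires once more.
                previousHealth = health;
            }

            private void playAnimation1() { /* animation1.play() in the real game */ }
            private void playAnimation2() { /* animation2.play() in the real game */ }
        }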

    Read the article

  • Performance implications of Synchronous Sockets vs Asynchronous Sockets

    - by Akash Kava
    We are trying to build an SMTP server to receive mail notifications from various clients over the internet. Since each of these conversations is long-lived and everything needs to be logged, doing this the asynchronous way is a little challenging, and with the Socket's asynchronous methods we are not sure how flow of control and error handling happen. Previously we wrote a lot of server/client apps, but we always used synchronous sockets, the reason being that the sessions are long and each session also has a lot of local data to manage, message parsing, and so on. Does anyone have any experience of the real performance differences between these two methods? Async calls use the ThreadPool, which we have seen many times just die for no reason, and then we fail to restart the thread pool, etc. For a request-response protocol like HTTP, async sockets make sense, but SMTP/IMAP and similar protocols are longer-lived, with interleaved messages plus the server's state machine, so async methods are really complicated to program. However, if anyone can share the performance characteristics of the two socket approaches, it would be helpful.

    Read the article

  • SQL Server 2005 high memory usage and performance problems

    - by emzero
    Hi there guys. I have an ASP.NET/SQL Server 2005 website running on a production server (Windows 2003, quad core, 4 GB RAM). The site normally runs smoothly, but after 2-3 weeks I notice slow performance on the site (specifically on one particular page), and I notice that the SQL Server process is using around 2 GB of RAM. So I restart the service, the site runs fast again, and the process is back to 300-400 MB. I'm looking for an explanation of why this is happening. What is SQL Server storing in RAM that takes up so much space and degrades performance? What can I do to avoid this? I'm trying to avoid restarting SQL Server every time this happens. Thank you!

    Read the article

  • WCF performance improvements

    - by Burt
    I am developing a WPF application that talks to a server via WCF services over the internet. After profiling the application, I noticed that a lot of time is being taken up by creating the appropriate WCF client proxy and making the call to the server. The code on the server is optimised and doesn't take any time to run, yet I am still seeing a 1.5 second delay from when a service is invoked to when it returns to the client. A few points to give a bit of background:

        I am using ASP.NET membership for security
        I will eventually hook into the same server-side code through a website
        I would eventually like to have offline support in the application

    I really need to nail the performance early, though, because if the app takes a couple of seconds to come back, that is too long for what I am trying to do. Can anyone suggest performance tips that will help me, please?

    Read the article

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk.

    Right now I'm trying to decide which transport to use. Options are raw TCP, and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF, if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I can get things like authentication and SSL for free. The problem right now is performance.

    I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this with either HTTP PUT operations via HttpWebRequest/HttpListener, or plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost.

    On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s, with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing.

    I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get HTTP.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes, and inserts that over time (a few days) sees significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuild to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information, table and index:

        CREATE TABLE icl_contacts (
          id bigint NOT NULL,
          campaignfqname character varying(255) NOT NULL,
          currentstate character(16) NOT NULL,
          xmlscheduledtime character(23) NOT NULL,
          ... 25 or so other fields, most of them fixed or varying character fields ...
          CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx
          ON icl_contacts
          USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          ->  Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
                Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware there are a variety of things we could do to normalize and improve the design of this table, and some of those options may be available to us. My focus in this question is on understanding how PostgreSQL is managing the index and query over time (understand why, not just fix it). If it were to be done over or significantly refactored, there would be a lot of changes.

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes, whereas a higher cardinality means less efficient storage but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note: by "low" vs. "high", I don't mean e.g. 1 million vs. 99 million for a 100-million-row table. I mean more like 90 million vs. 95 million.) Is my understanding correct? Related question: how does cardinality affect write performance?

    Read the article
