Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 130/572

  • How to find out which process is hogging the Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to ping queries from another server, and when I tried to log in over ssh it took about 10 seconds. I was able to resolve the problem by guesswork - I killed one process I suspected was the culprit, and that fixed it. Still, I would like to know the proper approach to detecting the culprit in this kind of "slow server" situation, and the proper way to resolve such slowness issues. These were the conditions while the server was slow:

        # vmstat 3 3
        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free   buff   cache  si  so  bi   bo    in    cs us sy id wa st
         1  1    176 6730868 285052 4899676   0   0   3    4     0     0  1  1 97  1  0
         0  0    176 6751576 285064 4899704   0   0   0  115 15307 37171  1  1 96  3  0
         0  0    176 6751948 285068 4899700   0   0   0   23 14813 39559  1  1 98  1  0

        # top
        top - 16:38:18 up 150 days, 19:36, 64 users, load average: 1.68, 1.46, 1.44
        Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie
        Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st
        Mem:  16620824k total,  9867124k used,  6753700k free,   287424k buffers
        Swap:  8193140k total,      176k used,  8192964k free,  4898996k cached
          PID USER  PR  NI VIRT RES  SHR S %CPU %MEM    TIME+  COMMAND
        26258 khk   34  19 130m 47m 7088 S 11.2  0.3 385:32.42 edm
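
    A minimal first-pass triage sketch, assuming the standard procps and sysstat tools are available (iotop usually has to be installed separately and needs root):

        # Top CPU consumers
        ps -eo pid,user,pcpu,pmem,stat,etime,comm --sort=-pcpu | head -15

        # Top memory consumers
        ps -eo pid,user,pcpu,pmem,rss,comm --sort=-rss | head -15

        # Is the box I/O-bound? A high %iowait points at disk rather than CPU
        iostat -x 3 3

        # Which processes are generating that I/O (requires root)
        iotop -o -b -n 3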

    Read the article

  • MySQL is not using multiple CPUs

    - by helpmhost
    Hi, our MySQL server has been using a lot of CPU lately (it has reached 100% several times and stays there for a while), and I noticed that the CPU load is all on one core of one CPU. I was hoping to spread that out to all 4 cores on my server. I have been tweaking the MySQL settings to use more RAM and less CPU, but it still occasionally reaches very high CPU usage. Everything I find on the topic refers to thread_concurrency (which I've read is a Solaris-only setting). What can I do on Linux? Thanks.
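
    MySQL parallelises across connections rather than within a single query, so one busy connection will pin one core no matter how many are available. A quick way to confirm where the load sits, assuming a single mysqld instance and the sysstat package for pidstat:

        # Per-thread CPU inside mysqld: many busy threads means many connections,
        # one busy thread means a single connection/query is the hot spot
        pidstat -t -u -p "$(pidof mysqld)" 5 3

        # See what those connections are actually running
        mysql -u root -p -e "SHOW FULL PROCESSLIST\G"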

    Read the article

  • Stop Windows Explorer from loading complete executable files

    - by user13001
    On Windows Vista, when browsing to a network folder containing executables, Windows Explorer seems to load each file completely just to be able to show the executable's icon (Resource Monitor indicates loads of traffic while the directory is loading). On XP only a part of the file is loaded. Is there a way to avoid the complete loading of these files? Note that disabling my antivirus does not help. Update: this only happens for executables linked with /SWAPRUN:NET. Microsoft confirmed this as a bug in Vista, but they do not seem very eager to fix it.

    Read the article

  • How do I remove 1,000,000 directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a directory, let's say the directory WebsiteCache, due to a bug. I want to remove all of them. My first approach was to use the command line tool:

        cd WebsiteCache
        rmdir /Q /S .

    This removes all subdirectories except the directory WebsiteCache itself, since it is the current working directory. After two hours I noticed that the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? It must take additional effort to do this in order. What is the fastest way to delete such an amount of directories?

    Read the article

  • Graphing per-user CPU usage on a Linux machine

    - by mart1n
    I want to graph (graphical output would be great, i.e. a .png file) the following situation: I have users A, B, and C. I limit their resources so that when all users run a CPU intensive task at the same time, those processes will use 25%, 25%, and 50% of CPU. I know I can get the real-time stats using top but have no idea what to do with them. I've searched through the huge top man page but haven't found much on the subject of outputting data that can be graphed. Ideally, the graph would show a span of maybe 30 seconds. Any ideas how to achieve this?
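
    A minimal sketch of one way to do this with just ps and gnuplot (assumed installed): sample each user's total %CPU once a second into a data file, then render a PNG. The user names alice/bob/carol are placeholders for the actual accounts:

        #!/bin/bash
        # Sample each user's total %CPU once a second for 30 seconds
        rm -f cpu_*.dat
        for sec in $(seq 1 30); do
            for u in alice bob carol; do
                ps -u "$u" -o pcpu= | awk -v t="$sec" '{s+=$1} END {print t, s+0}' >> "cpu_$u.dat"
            done
            sleep 1
        done

        # Render one line per user into a PNG
        gnuplot -e "set terminal png size 800,400; set output 'cpu_by_user.png'; set xlabel 'seconds'; set ylabel '%CPU'; plot 'cpu_alice.dat' with lines title 'alice', 'cpu_bob.dat' with lines title 'bob', 'cpu_carol.dat' with lines title 'carol'"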

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions: if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in Device Manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website, "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article

  • Server specification recommendation

    - by foo
    To cut the story short, I can't buy an item (server/CPU/motherboard/RAM) that costs more than USD 330. However, I can combine them - meaning I can buy a CPU that costs USD 330 and a motherboard that costs USD 330. With this limitation I can't buy a powerful 1U server, which would definitely cost more than USD 330. With that in mind, I was hoping to build a powerful desktop PC to use as a database server. However, in my experience desktop PCs don't last very long; usually the motherboard just dies by itself after 1 or 2 years. So, what would you recommend I buy with this kind of budget? Every item must be <= USD 330. It will be used as a MySQL server. RAID would be nice. 1 TB is plenty for my data. I do not need an external graphics card (onboard would do just fine), mouse, keyboard, or monitor. It must be Linux friendly. One Ethernet port is enough. It's important that the hardware is made of components that will last (at least 3 years or so). The server will be placed in an air-conditioned room, but good ventilation for the server is always preferred. I won't overclock it. An Intel processor is preferred. Thanks in advance.

    Read the article

  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, NGINX) and have Varnish in front of them; this works very well. However, once the cache timeout is reached, I see this:

        1. a new client requests the page
        2. Varnish recognizes the cache timeout
        3. the client waits
        4. Varnish fetches the new page from the backend
        5. Varnish delivers the new page to the client (and has the page cached for the next request, which gets it instantly)

    What I would like instead is:

        1. the client requests the page
        2. Varnish recognizes the timeout
        3. Varnish delivers the old page to the client
        4. Varnish fetches the new page from the backend and puts it into the cache

    In my case it's not a site where outdated information is a big problem, especially not when we're talking about a cache timeout of a few minutes. However, I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's a sample of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200, 1.97, 12710,/,1,2013-06-24 00:21:06
        ...
        HTTP/1.1,200, 1.88, 12710,/,1,2013-06-24 00:21:20
        ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:22:08
        ...
        HTTP/1.1,200, 1.89, 12710,/,1,2013-06-24 00:22:22
        ...
        HTTP/1.1,200, 1.94, 12710,/,1,2013-06-24 00:23:10
        ...
        HTTP/1.1,200, 1.91, 12709,/,1,2013-06-24 00:23:23
        ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:24:12
        ...

    I left out the hundreds of requests that completed in 0.02 or so. But it still concerns me that there are going to be users having to wait almost 2 seconds for their raw HTML. Can't we do any better here? (I came across "Varnish send while cache"; it sounded similar but is not exactly what I'm trying to do.)
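
    What is being described here is Varnish's "grace mode". A minimal sketch of the relevant VCL, assuming Varnish 3.x (current when this was asked) and the stock /etc/varnish/default.vcl; note that in 3.x the one request that triggers the refetch still waits for the backend while every other client gets the stale copy, and only Varnish 4+ turns this into a true background fetch:

        sub vcl_recv {
            # how long we are willing to hand out stale objects to clients
            set req.grace = 5m;
        }

        sub vcl_fetch {
            set beresp.ttl   = 1m;   # normal freshness
            set beresp.grace = 5m;   # keep objects around 5 minutes past expiry
        }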

    Read the article

  • Lightweight ad-blocker for Firefox

    - by student
    On an old machine (512 MB RAM) I am currently running Ubuntu Jaunty and Firefox 3.0.15. I tried the ad-blocking add-on Adblock Plus, but it eats a lot of RAM (300 MB). Is the high memory load of this add-on a bug that is fixed in a newer version, or is it just normal? If so, why is the memory usage so high? Is there another ad-blocking add-on for Firefox, or another browser and add-on combination for Linux (Ubuntu Jaunty), which uses significantly less RAM?

    Read the article

  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "top" is not helpful for this since its calculations for RES and VIRT are inconsistent. Instead I am matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and then total up "bytes" for all of the segments belonging to the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However it doesn't match up - the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running this as root, so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or is it only shared memory (and not "un-shared" memory)? If this approach is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
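
    /proc/sysvipc/shm only lists SysV shared memory segments; a process's private allocations (heap, stacks, mmap'd work areas) never appear there, which is the usual reason a database's own accounting comes out higher. A small sketch of both sides, assuming the standard column layout (size in column 4, owner uid in column 8):

        # Total SysV shared memory per owning uid, in MB
        awk 'NR > 1 { mb[$8] += $4 / 1048576 } END { for (u in mb) printf "uid %-6s %10.1f MB\n", u, mb[u] }' /proc/sysvipc/shm

        # Cross-check against ipcs, which reads the same kernel data
        ipcs -m

        # Private (non-shared) memory has to come from smaps instead;
        # PID below is a placeholder for the process being measured
        PID=12345
        awk '/^Private_/ { kb += $2 } END { print kb/1024, "MB private" }' /proc/$PID/smaps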

    Read the article

  • Can installing Linux or Grub make all of my operating systems slow?

    - by Geore Shg
    I installed Linux Mint 64-bit recently, but it freezes up all the time. Not only that, but Grub wouldn't detect Windows. I thought it was just an issue with Linux Mint, so I booted into Ubuntu - however that froze up as well. I downloaded Fedora to replace Linux Mint with, but that also freezes up. I finally gave up trying to repair Grub, burning a Super Grub2 Disk and using that to boot into Windows. Windows now freezes up as well. I installed a new copy of Windows on a different drive, but to no avail. Right before all this started, my computer was running very smoothly. I am wondering whether the installation of Linux Mint, the reinstallation of Grub, or my messing with the BIOS (when I was attempting to repair Grub) could have done something drastic enough that everything is slow now. I realize that computers get gradually slower over time, but this was in no way gradual, and it happened directly after the installation of Linux Mint. If so, what should I do?
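
    Neither Grub nor an extra Linux install can, by itself, slow down an unrelated Windows installation, so a hardware-level cause (a failing disk, or a BIOS SATA/DMA setting changed while troubleshooting) is the more likely suspect. A quick set of checks from any Linux live session, assuming smartmontools and hdparm are available and the system disk is /dev/sda:

        # Kernel log: look for ata errors, bus resets, or "failed command" lines
        dmesg | grep -iE 'ata|error|reset' | tail -50

        # SMART health and attributes for the disk
        sudo smartctl -H -A /dev/sda

        # Confirm the drive has not fallen back to a slow transfer mode
        sudo hdparm -I /dev/sda | grep -iE 'udma|mode'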

    Read the article

  • MySQL process goes over 100% of CPU usage

    - by Temnovit
    Hello! I'm experiencing some problems with my LAMP server. Recently everything became very slow, even though the visitor count on my websites didn't change much. When I run the top command, it says that the mysql process has taken over 150-200% of CPU. How is that possible? I always thought 100% was the maximum. I'm running Ubuntu 9.04 server edition with 1.5 GB RAM. my.cnf settings:

        key_buffer              = 64M
        max_allowed_packet      = 16M
        thread_stack            = 192K
        thread_cache_size       = 8
        myisam-recover          = BACKUP
        max_connections         = 200
        table_cache             = 512
        table_definition_cache  = 512
        thread_concurrency      = 2
        read_buffer_size        = 1M
        sort_buffer_size        = 4M
        join_buffer_size        = 1M
        query_cache_limit       = 1M    # the maximum size of individual query results
        query_cache_size        = 128M

    The output of MySQLTuner and of the top command were attached as images. What could be the cause of this problem? Can I make changes to my my.cnf to prevent the server from hanging?
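
    top reports CPU per core, so on a multi-core machine 200% simply means two cores' worth of work; it is not an error. The usual next step is to find out which queries are burning that CPU. A sketch, assuming MySQL 5.1 or later (on older servers the equivalent settings go in my.cnf as log_slow_queries / long_query_time):

        # Log every statement slower than one second
        mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1;"

        # After some traffic, summarise the worst offenders
        # (the log path varies - check SHOW VARIABLES LIKE 'slow_query_log_file')
        mysqldumpslow -s t /var/log/mysql/mysql-slow.log | head -20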

    Read the article

  • VPC on Windows 7 very slow network

    - by Shigg
    I have a Windows 2003 virtual machine which I use for website testing. I've just installed Windows 7 and am using the new version of Virtual PC (not XP Mode). When I try to copy a file - I need to copy some big databases across - I get a file copy speed of about 20 KB per second. Copying from one PC to another on the real network transfers files at 13 MB per second. Any ideas what may be causing this? I've turned off differential network compression on Windows 7. The virtual HD is on a separate physical drive from the OS. Running Windows 7 64-bit on a dual Xeon with 16 GB RAM and 10,000 RPM drives. I tried installing VPC 2007, but Windows blocks it from running, saying it's not compatible. Many thanks for any ideas.

    Read the article

  • Windows 7 system CPU bogged by windows services, no explanation

    - by Alex
    I'm looking at a laptop for a colleague which is running terribly slowly. A quick look showed that the CPU was 100% used by 2-3 svchost processes, which of course doesn't tell much, since those are just 'cover' processes with services running underneath them. So I fired up Process Explorer in hopes of finding a shady rogue service bogging down the system, but to my surprise I found that genuine Microsoft Windows services (or at least very well disguised ones) are the culprits:

        dnscache (DNS Client)
        IKEEXT (IKE and AuthIP IPSec Keyring Modules)
        iphlpsvc (IP Helper)

    Seen separately these services might seem odd candidates for using a lot of CPU, but taking a step back, all three are quite closely related to networking. I've tried running netsh int ip reset log.txt, which has helped me solve bizarre network-related problems in the past, but this didn't help. Of course I also thought about a virus, but both MS Security Essentials and Malwarebytes came up clean (I let both run a full scan).

    Read the article

  • Thunderbird very slow with Gmail

    - by koskoz
    I'm using the latest version of Thunderbird with 3 Gmail accounts. Every time I launch it, it seems to download all my messages again. I've compacted folders (does that action apply to all 3 accounts, or do I need to do it for each of them?) and deleted the .msc files, but nothing changed. The result is software that uses a lot of bandwidth and is very slow to use: it's a pain to write a message or even to view one. The whole application is so slow it's almost unusable; I've never seen anything like it. I'm using these add-ons: Dictionary, Google Calendar, Lightning. My Gmail accounts are configured to use IMAP.

    Read the article

  • Slow Transfer Speeds from KVM host to client

    - by indian maiden
    I am trying to isolate the root cause of slow transfer speeds from my host OS to a KVM client. Both are Linux.

        # rsync on the host (192.168.1.72)
        rsync -auv --progress rut3.img /tmp/                  [54.09 MB/s]

        # rsync to the client
        rsync -auv --progress rut3.img 192.168.1.80:/tmp/     [25.52 MB/s]

    I realize that there will be some TCP overhead on the transfer, but over 50%? Can someone enlighten me on what could be slowing down the transfers so much?
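
    The guest-bound copy goes through ssh encryption plus the virtual NIC, so it helps to test those pieces separately. A sketch, assuming iperf is installed on both host and guest, libvirt is managing the guest, and GUESTNAME and the arcfour cipher are placeholders to adjust:

        # Raw TCP throughput, independent of rsync and ssh
        # on the guest (192.168.1.80):
        iperf -s
        # on the host:
        iperf -c 192.168.1.80 -t 20

        # Which NIC model does the guest have? Emulated rtl8139/e1000 devices
        # are far slower than virtio-net
        virsh dumpxml GUESTNAME | grep -A3 '<interface'

        # Rough check of ssh cipher overhead (if your OpenSSH still offers arcfour)
        rsync -auv --progress -e "ssh -c arcfour" rut3.img 192.168.1.80:/tmp/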

    Read the article

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update/delete queries are taking a long time to execute, sometimes close to 30 seconds. I use the following command to watch these queries being executed: watch -n1 mysqladmin proc stat. We have still not been able to track down the root of this problem. I would appreciate it if anyone had any pointers as to what we can check or improve to resolve the issue. Thanks
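
    When trivial DML statements stall, they are usually waiting on something (row or table locks, or the VPS's disk I/O) rather than doing work. A sketch of what to capture while one of them is hanging, assuming InnoDB tables and the sysstat package for iostat:

        # What every connection is doing right now; look for "Locked" states
        # and for long-running statements that hold tables
        mysql -u root -p -e "SHOW FULL PROCESSLIST\G"

        # InnoDB lock waits and pending I/O show up here
        mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"

        # Is the VPS itself starved for disk? High %iowait/await points at the
        # host's storage rather than at MySQL
        iostat -x 5 3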

    Read the article

  • Very high CPU and low RAM usage - is it possible to shift some of the CPU usage to the RAM (with CloudLinux LVE Manager installed)?

    - by Chriswede
    I had to install CloudLinux so that I could somewhat control the CPU usage and, more importantly, the concurrent connections the websites use. But as you can see, the server load is way too high, and that's why some sites take up to 10 seconds to load!

        Server load   22.46 (8 CPUs) (!)
        Memory Used   36.32% (2,959,188 of 8,146,632) (ok)
        Swap Used     0.01% (132 of 2,104,504) (ok)

        Server: 8 x Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
        Memory: 8143680k/9437184k available (2621k kernel code, 234872k reserved, 1403k data, 244k init)
        Linux

    Yesterday: a total of 214,514 page views (AWStats). Now my question: can I shift some of the CPU usage to the RAM? Or what else could I do to make the sites run faster? (The websites are dynamic, so SQL heavy.) Thanks

        top - 06:10:14 up 29 days, 20:37, 1 user, load average: 11.16, 13.19, 12.81
        Tasks: 526 total, 1 running, 524 sleeping, 0 stopped, 1 zombie
        Cpu(s): 42.9%us, 21.4%sy, 0.0%ni, 33.7%id, 1.9%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  8146632k total, 7427632k used,  719000k free,  131020k buffers
        Swap: 2104504k total,     132k used, 2104372k free, 4506644k cached
           PID USER     PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
        318421 mysql    15   0 1315m 754m 4964 S 474.9  9.5  95300:17 mysqld
          6928 root     10  -5     0    0    0 S   2.0  0.0  90:42.85 kondemand/3
        476047 headus   17   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476055 headus   18   0  172m  18m 9.9m S   1.7  0.2   0:00.05 php
        476056 headus   15   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476061 headus   18   0  172m  19m  10m S   1.7  0.2   0:00.05 php
          6930 root     10  -5     0    0    0 S   1.3  0.0 161:48.12 kondemand/5
          6931 root     10  -5     0    0    0 S   1.3  0.0 193:11.74 kondemand/6
        476049 headus   17   0  172m  19m  10m S   1.3  0.2   0:00.04 php
        476050 headus   15   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
        476057 headus   17   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
          6926 root     10  -5     0    0    0 S   1.0  0.0  90:13.88 kondemand/1
          6932 root     10  -5     0    0    0 S   1.0  0.0 247:47.50 kondemand/7
        476064 worldof  18   0  172m  19m  10m S   1.0  0.2   0:00.03 php
          6927 root     10  -5     0    0    0 S   0.7  0.0  93:52.80 kondemand/2
          6929 root     10  -5     0    0    0 S   0.3  0.0 161:54.38 kondemand/4
          8459 root     15   0  103m 5576 1268 S   0.3  0.1  54:45.39 lvest

    Read the article

  • How to remove the pause during JBoss 5.1.0 GA boot between ProfileServiceBootstrap and AnnotationCreator?

    - by rrc7cz
    I've managed to strip down my JBoss profile enough that it boots in 1.5 minutes. I started with the web profile and started pulling out stuff I didn't need. The bulk of my boot time can be seen here:

        ...
        15:21:51,890 INFO  [ProfileServiceBootstrap] Loading profile: ProfileKey@86d597[domain=default, server=default, name=np]
        15:22:55,406 WARN  [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
        15:22:55,578 WARN  [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
        ...

    Does anyone have any idea what JBoss is doing here for 1 minute? If so, is there any way to speed it up or skip it entirely? This is for developer instances, so boot time is quite important.

    Read the article

  • Why is Cuteflow slow on XAMPP 1.7.1?

    - by gsk
    I have recently installed CuteFlow (a PHP-based document circulation application) on my machine, as I need to customize this software. I have XAMPP 1.7.1 running on a Windows XP machine on which this application is deployed. While all my other applications running on XAMPP load fast, only this application takes an exorbitant amount of time. The same CuteFlow application runs very well on my colleagues' machines.

    Read the article

  • Need reasonably priced router with QoS support [closed]

    - by ULTRA_POROV
    I don't need wireless. I am expecting very heavy traffic, with possibly thousands of TCP connections open at one time, which requires that the router have good hardware. I also need to limit the different services I will provide - let's say I need to guarantee 60% of all the bandwidth to HTTP, 10% to FTP, and 10% to mail, so the router software must have flexible QoS options as well. I don't know which one to choose, because this information is usually not given in the router specs.

    Read the article

  • Should I completely turn off swap for linux webserver?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux webservers that have enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off and that happens, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually end in the same crash, but before that it will push crucial processes like sshd out to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, under a DDoS the system may go into a completely unusable state due to huge lags, and I will probably be unable to log in to kill the webserver process or deny all incoming traffic (all but ssh). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
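
    A common middle ground is to keep a small swap but tell the kernel to avoid using it, and to shield sshd from the OOM killer so the box stays reachable. A sketch, assuming a kernel of that era exposing /proc/<pid>/oom_adj (newer kernels use oom_score_adj with a -1000..1000 range):

        # Prefer dropping page cache over swapping application pages out
        sudo sysctl -w vm.swappiness=10
        echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf

        # Exempt sshd from the OOM killer (-17 means "never kill" on oom_adj kernels)
        for pid in $(pidof sshd); do
            echo -17 | sudo tee /proc/$pid/oom_adj > /dev/null
        done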

    Read the article

  • IIS6 - Classic ASP - "out of memory"/"out of string space"

    - by glaucon
    We have a Classic ASP application that's under significantly more load than usual. From time to time we have been getting "out of memory" and "out of string space" errors in the httperr log. We do not usually see these errors. For the moment we cannot change the application. Is there anything we can do to the IIS configuration that will help reduce or stop these errors occurring? The application pool is currently set to default values.

    Read the article

  • Is the XP VMM a bottleneck on a multi core machine?

    - by JeffV
    I have a dual-Xeon hex-core machine running an I/O-intensive application (Windows XP 32-bit). I am seeing a hardware driver (1/2 user mode, 1/2 kernel, streaming data) that is generating 6k delta page faults per second. When other applications load or allocate large amounts of memory, the driver's hardware buffer gets an underrun (the application is not feeding it fast enough). Could this be because the kernel only uses one core to service page fault interrupts?

    Read the article

  • Very high Magento/Apache memory usage even without visitors (are we being fooled by our hosting company?)

    - by MrDobalina
    I am no server guy, and we have issues with our speed, so I come here asking for advice. We have a VPS with 2 cores and 2 GB of RAM at a Magento-specialized hosting company. Over the course of the last weeks our site speed has gotten worse, even though our store is new, has fewer than 1000 SKUs, and gets not even 100 visitors a day. At magespeedtest.com we only get 1.87 trans/sec @ 2.11 secs each with a mere 5 concurrent users. Our Magento log files are clean, and we have no huge database tables or anything like that. When we look at our server's real-time stats, we see that memory usage jumped from about 34% to 71%, and now 82%, in just a few days while idle, with no visitors on the site. Our hosting company said that we do not need to worry about that, as it is maybe related to MySQL, which creates buffers (which are maybe not even actually being used), and that what is important is CPU and swap - those stats are OK. They also said that the low benchmark scores are caused by bad extensions or template modifications on our side. We are not sure if we can trust that statement, as we only have 4 plugins installed (all from aheadworks and amasty, which are known to be among the best Magento extension developers). Our template modifications are purely HTML and CSS, no modifications to the PHP code. Our PageSpeed score is 93/100 in Firebug, and Magento is properly configured, so the problem really only becomes obvious when there are a handful of users on the site at the same time. Can anyone confirm our hosting company's statement about memory usage, and where can I start looking for a solution?
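
    On Linux, "memory used" percentages from hosting panels usually include the page cache, which grows to fill otherwise idle RAM by design and is reclaimed instantly when applications need it, so a rising figure on an idle box is not by itself alarming. A quick way to see who really holds the RAM, assuming shell access to the VPS and that the web server process is apache2 (it may be httpd on other distributions):

        # The "-/+ buffers/cache" line (or "available" on newer kernels) is the
        # number that matters; raw "used" counts reclaimable cache
        free -m

        # Largest resident processes - typically mysqld plus Apache/PHP children
        ps aux --sort=-rss | head -15

        # Average per-child Apache footprint, to sanity-check MaxClients sizing
        ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) print sum/n/1024, "MB avg over", n, "processes"}'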

    Read the article
