Search Results

Search found 218 results on 9 pages for 'sy moen'.

Page 6/9

  • PHP running too slow, always showing "504 Gateway Time-out"

    - by komase
    PHP is running too slowly and always shows "504 Gateway Time-out". My server spec: dual-core Atom 330 CPU, 2GB RAM, nginx with PHP via FastCGI, eAccelerator. CPU is 74.3% idle; RAM used: 350MB of 2GB. I have lots of sites on this server, with cron jobs running every minute around the clock; in some minutes, two or three cron jobs run at the same time. All my sites' cron jobs are heavy and usually take more than a minute to finish. My nginx.conf grew so big that nginx refused to start because there were too many sites in it; that was solved by increasing server_names_hash_max_size. I'm planning to add more sites to this server. Now, opening my website always shows "504 Gateway Time-out". I have tried many eAccelerator and PHP settings, but the 504 Gateway Time-out still happens. It disappears when cron is disabled. I have no idea: is this because there is not enough processor power? And what should I do - upgrade my processor?

    Added - this is top for my CPU just now:

    Cpu(s): 17.5%us, 3.8%sy, 0.1%ni, 71.6%id, 6.9%wa, 0.1%hi, 0.1%si, 0.0%st
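
    Since the timeouts track cron activity, one low-risk mitigation worth trying (a hedged sketch, not from the question; the lock path and script name are placeholders) is to stop the heavy jobs from piling up and to lower their priority so the FastCGI workers are not starved:

    # crontab entry: flock -n skips a run if the previous one is still going;
    # nice/ionice keep the job from starving PHP and nginx
    * * * * * flock -n /tmp/heavy-job.lock nice -n 19 ionice -c3 php /path/to/heavy-job.php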

    Read the article

  • Apache Consuming Resources

    - by Chris Edwards
    Our web server has suddenly been giving us load issues. After I restart Apache, the load stays low for a few hours up to a day or so, then it's back up to around 3.0 until I restart Apache again. Any suggestions on tracking down what is causing this? Thanks! Chris Edwards

    top - 20:15:05 up 19 days, 10:59, 1 user, load average: 2.11, 2.17, 2.47
    Tasks: 532 total, 6 running, 525 sleeping, 0 stopped, 1 zombie
    Cpu(s): 11.5%us, 0.4%sy, 0.0%ni, 88.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 32842656k total, 13185872k used, 19656784k free, 6143740k buffers
    Swap: 1048568k total, 0k used, 1048568k free, 3515252k cached

      PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
    19089 apache 20  0 1912m 1.5g 6584 R 99.6  4.9 71:01.53 /usr/sbin/httpd
    21136 apache 20  0  392m  55m 5736 R 95.0  0.2  0:03.45 /usr/sbin/httpd
    21139 apache 20  0  374m  38m 5808 S 40.5  0.1  0:04.91 /usr/sbin/httpd
    21124 apache 20  0  389m  51m 5948 R 38.9  0.2  0:03.15 /usr/sbin/httpd
    21111 apache 20  0  371m  35m 5964 S 18.8  0.1  0:01.22 /usr/sbin/httpd
    21127 apache 20  0  375m  39m 5832 S 17.8  0.1  0:01.66 /usr/sbin/httpd
    21128 apache 20  0  374m  38m 5792 S 16.2  0.1  0:01.56 /usr/sbin/httpd
    21110 apache 20  0  374m  38m 5848 S 15.9  0.1  0:01.02 /usr/sbin/httpd
    21113 apache 20  0  374m  38m 5836 S 15.9  0.1  0:02.16 /usr/sbin/httpd
    21077 apache 20  0  379m  43m 6408 S 11.0  0.1  0:07.22 /usr/sbin/httpd
    21101 apache 20  0  384m  49m 6988 R  5.8  0.2  0:04.47 /usr/sbin/httpd
    21112 apache 20  0  374m  38m 5956 R  2.6  0.1  0:01.61 /usr/sbin/httpd
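
    One way to see what the runaway child (PID 19089) is actually serving is the Apache scoreboard (a hedged sketch assuming mod_status is available; the Allow address is a placeholder):

    # in httpd.conf, then reload Apache
    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>

    apachectl fullstatus    # the PID column maps each child to its client and request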

    Read the article

  • saslauthd using too much memory

    - by Brian Armstrong
    Woke up today to find my site slow/unresponsive. Pulled up top, and it looks like a ton of saslauthd processes have spun up, using about 64m of RAM each and pushing the machine into swap. I've never seen this many running on there.

    top - 16:54:13 up 85 days, 11:48, 1 user, load average: 0.32, 0.50, 0.38
    Tasks: 143 total, 1 running, 142 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 97.3%id, 0.2%wa, 0.0%hi, 0.0%si, 1.4%st
    Mem: 1048796k total, 1025904k used, 22892k free, 14032k buffers
    Swap: 2097144k total, 332460k used, 1764684k free, 194348k cached

      PID USER    PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
      848 admin   20  0  263m 115m 4840 S    0 11.3   5:02.91 ruby
      906 admin   20  0  265m 113m 4828 S    0 11.1   5:37.24 ruby
    30484 admin   20  0  248m  91m 4256 S    6  9.0 219:02.30 delayed_job
     4075 root    20  0  160m  65m  952 S    0  6.4   0:24.22 saslauthd
     4080 root    20  0  162m  64m  936 S    0  6.3   0:24.48 saslauthd
     4079 root    20  0  162m  64m  936 S    0  6.3   0:24.70 saslauthd
     4078 root    20  0  164m  63m  936 S    0  6.2   0:24.66 saslauthd
     4077 root    20  0  163m  62m  936 S    0  6.1   0:24.66 saslauthd
     3718 mysql   20  0  312m  52m 3588 S    1  5.1   3499:40 mysqld
      699 root    20  0 72744 7640 2164 S    0  0.7   0:00.50 ruby
    15701 postfix 20  0  106m 5712 4164 S    1  0.5   0:00.50 smtpd
    15702 postfix 20  0 52444 3252 2452 S    1  0.3   0:00.06 cleanup
     4062 postfix 20  0 41884 3104 1788 S    0  0.3 125:26.01 qmgr
    15683 root    20  0 51504 2780 2180 S    0  0.3   0:00.04 sshd
    14595 postfix 20  0 52308 2548 2304 S    1  0.2   0:24.60 proxymap
    15483 postfix 20  0 43380 2544 1992 S    0  0.2   0:00.38 smtp
    15486 postfix 20  0 43380 2544 1992 S    0  0.2   0:00.36 smtp
    15488 postfix 20  0 43380 2540 1992 S    0  0.2   0:00.38 smtp
    15485 postfix 20  0 43380 2532 1984 S    0  0.2   0:00.36 smtp
    15489 postfix 20  0 43380 2532 1984 S    0  0.2   0:00.40 smtp

    I wasn't sure what saslauthd is; Google says it handles plaintext authentication. The machine has been sending a lot of email through Postfix, so this could be related. Anyone know why so many may have spun up? Are they safe to kill? Thanks!
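
    The size of the saslauthd worker pool is set at startup, so one hedged fix (a sketch; the file is /etc/default/saslauthd on Debian/Ubuntu, /etc/sysconfig/saslauthd on RHEL-family systems) is simply to cap the number of workers:

    # /etc/default/saslauthd -- limit workers with -n, then restart the service
    START=yes
    MECHANISMS="pam"
    OPTIONS="-c -m /var/run/saslauthd -n 2"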

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2GB of "cached" memory:

    free:
                 total  used  free  shared  buffers  cached
    Mem:          3945  3893    51       0       28    1216
    -/+ buffers/cache:  2648  1296
    Swap:         3895   857  3038

    I have learned that cached memory counts as used even though it can be reclaimed when needed. Is the cached value an indicator of the need for more RAM?

    cat /proc/meminfo, 1 day after flushing the cache:

    MemTotal:      4040048 kB
    MemFree:         32844 kB
    Buffers:         18956 kB
    Cached:        1249092 kB
    SwapCached:     161576 kB
    Active:        3611328 kB
    Inactive:       189104 kB
    SwapTotal:     3989496 kB
    SwapFree:      2894200 kB
    Dirty:           20520 kB
    Writeback:           0 kB
    AnonPages:     2523496 kB
    Mapped:         217744 kB
    Slab:            70940 kB
    SReclaimable:    36756 kB
    SUnreclaim:      34184 kB
    PageTables:      99648 kB
    NFS_Unstable:        0 kB
    Bounce:              0 kB
    CommitLimit:   6009520 kB
    Committed_AS:  6401716 kB
    VmallocTotal: 34359738367 kB
    VmallocUsed:     18852 kB
    VmallocChunk: 34359719439 kB
    HugePages_Total:     0
    HugePages_Free:      0
    HugePages_Rsvd:      0
    HugePages_Surp:      0
    Hugepagesize:     2048 kB

    top:

    top - 17:20:10 up 112 days, 3:06, 1 user, load average: 1.01, 1.62, 1.48
    Tasks: 208 total, 1 running, 207 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.6%us, 0.6%sy, 0.0%ni, 97.5%id, 1.3%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 4040048k total, 3953108k used, 86940k free, 16348k buffers
    Swap: 3989496k total, 1095712k used, 2893784k free, 1235436k cached
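
    Cache by itself is healthy; a better indicator of real memory pressure is sustained swap traffic (the snapshot above already shows ~1GB of swap in use). A hedged sketch for checking:

    vmstat 5      # non-zero si/so columns over time = the box is actively swapping
    sar -B 5 5    # pgscank/pgscand rates show the kernel hunting for pages (sysstat)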

    Read the article

  • High load without explanation

    - by Sebastian
    I have a very high load on my machine and don't know what is responsible or how to find out. The machine runs a JBoss app server and MySQL. Here is a top from the user at peak time:

    top - 16:23:01 up 101 days, 6:50, 1 user, load average: 23.42, 21.53, 24.73
    Tasks: 9 total, 1 running, 8 sleeping, 0 stopped, 0 zombie
    Cpu(s): 17.2%us, 1.6%sy, 0.0%ni, 80.4%id, 0.1%wa, 0.1%hi, 0.7%si, 0.0%st
    Mem: 16440784k total, 16263720k used, 177064k free, 151916k buffers
    Swap: 16780872k total, 30428k used, 16750444k free, 8963648k cached

      PID USER PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
    27344 b    40  0 16.0g 6.5g  14m S  169 41.7 1184:09 java
     6047 b    40  0 11484 1232 1228 S    0  0.0 0:00.01 mysqld_safe
     6192 b    40  0  604m 182m 4696 S    0  1.1 93:30.40 mysqld
     7948 b    40  0 84036 1968 1176 S    0  0.0 0:00.07 sshd
     7949 b    40  0 14004 2900 1608 S    0  0.0 0:00.03 bash
     7975 b    40  0  8604 1044  840 S    0  0.0 0:00.44 top

    The CPU usage of the Java process is normal. The peaks only show up when I deploy a certain web application. Could the resulting network traffic boost the load in such a way that I don't see it in top?
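
    Load counts tasks that are running or blocked in uninterruptible (D-state) sleep, so high load with an idle CPU usually means blocked tasks rather than hidden CPU work. A hedged diagnostic sketch:

    ps -eo state,pid,wchan:30,cmd | awk '$1=="D"'    # who is blocked, and on what
    top -H -p 27344                                  # per-thread view of the JBoss JVM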

    Read the article

  • Nice level not working on Linux

    - by xioxox
    I have some highly floating-point-intensive processes doing very little I/O. One is called "xspec"; it calculates a numerical model and returns a floating-point result to a master process every second (via stdout). It is niced at level 19. I have another simple process, "cpufloattest", which just does numerical computations in a tight loop. It is not niced. I have a 4-core i7 system with hyperthreading disabled, and I have started four of each type of process. Why is the Linux scheduler (Linux 3.4.2) not properly limiting the CPU time taken up by the niced processes?

    Cpu(s): 56.2%us, 1.0%sy, 41.8%ni, 0.0%id, 0.0%wa, 0.9%hi, 0.1%si, 0.0%st
    Mem: 12297620k total, 12147472k used, 150148k free, 831564k buffers
    Swap: 2104508k total, 71172k used, 2033336k free, 4753956k cached

      PID USER PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
    32399 jss  20  0 44728  32m  772 R 62.7  0.3 4:17.93 cpufloattest
    32400 jss  20  0 44728  32m  744 R 53.1  0.3 4:14.17 cpufloattest
    32402 jss  20  0 44728  32m  744 R 51.1  0.3 4:14.09 cpufloattest
    32398 jss  20  0 44728  32m  744 R 48.8  0.3 4:15.44 cpufloattest
     3989 jss  39 19 1725m 690m 7744 R 44.1  5.8 1459:59 xspec
     3981 jss  39 19 1725m 689m 7744 R 42.1  5.7 1459:34 xspec
     3985 jss  39 19 1725m 689m 7744 R 42.1  5.7 1460:51 xspec
     3993 jss  39 19 1725m 691m 7744 R 38.8  5.8 1458:24 xspec

    The scheduler does what I expect if I start eight of the cpufloattest processes with four of them niced (i.e. four get most of the CPU, and four get very little).
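
    One plausible explanation (hedged; it depends on the kernel configuration) is scheduler autogrouping: on kernels built with CONFIG_SCHED_AUTOGROUP, nice values only take effect between tasks in the same autogroup (roughly, the same session), so processes started from different sessions get CPU shared per group regardless of nice. A sketch for testing that hypothesis:

    cat /proc/sys/kernel/sched_autogroup_enabled       # 1 = autogrouping is active
    echo 0 > /proc/sys/kernel/sched_autogroup_enabled  # as root, disable and re-test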

    Read the article

  • Find out which task is generating a lot of context switches on Linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second.

    # vmstat 3
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
     r  b swpd   free  buff   cache si so bi  bo in    cs us sy id wa
     2  0 7292 249472 82340 2291972  0  0  0   0  0     0  7 13 79  0
     0  0 7292 251808 82344 2291968  0  0  0 184 24 20090  1  1 99  0
     0  0 7292 251876 82344 2291968  0  0  0  83 17 20157  1  0 99  0
     0  0 7292 251876 82344 2291968  0  0  0  73 12 20116  1  0 99  0
    ...

    but uptime shows a small load:

    load average: 0.01, 0.02, 0.01

    and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process/thread? I tried to analyze pidstat output:

    # pidstat -w 10 1
    12:39:13   PID cswch/s nvcswch/s Command
    12:39:23     1    0.20      0.00 init
    12:39:23     4    0.20      0.00 ksoftirqd/0
    12:39:23     7    1.60      0.00 events/0
    12:39:23     8    1.50      0.00 events/1
    12:39:23    89    0.50      0.00 kblockd/0
    12:39:23    90    0.30      0.00 kblockd/1
    12:39:23   995    0.40      0.00 kirqd
    12:39:23   997    0.60      0.00 kjournald
    12:39:23  1146    0.20      0.00 svscan
    12:39:23  2162    5.00      0.00 kjournald
    12:39:23  2526    0.20      2.00 postgres
    12:39:23  2530    1.00      0.30 postgres
    12:39:23  2534    5.00      3.20 postgres
    12:39:23  2536    1.40      1.70 postgres
    12:39:23 12061   10.59      0.90 postgres
    12:39:23 14442    1.50      2.20 postgres
    12:39:23 15416    0.20      0.00 monitor
    12:39:23 17289    0.10      0.00 syslogd
    12:39:23 21776    0.40      0.30 postgres
    12:39:23 23638    0.10      0.00 screen
    12:39:23 25153    1.00      0.00 sshd
    12:39:23 25185   86.61      0.00 daemon1
    12:39:23 25190   12.19     35.86 postgres
    12:39:23 25295    2.00      0.00 screen
    12:39:23 25743    9.99      0.00 daemon2
    12:39:23 25747    1.10      3.00 postgres
    12:39:23 26968    5.09      0.80 postgres
    12:39:23 26969    5.00      0.00 postgres
    12:39:23 26970    1.10      0.20 postgres
    12:39:23 26971   17.98      1.80 postgres
    12:39:23 27607    0.90      0.40 postgres
    12:39:23 29338    4.30      0.00 screen
    12:39:23 31247    4.10     23.58 postgres
    12:39:23 31249   82.92     34.77 postgres
    12:39:23 31484    0.20      0.00 pdflush
    12:39:23 32097    0.10      0.00 pidstat

    It looks like some PostgreSQL tasks are doing 10 context switches per second, but that doesn't come anywhere near the 20k total. Any idea how to dig a little deeper for an answer?
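
    Per-process pidstat hides threads, which is a common reason the per-process numbers don't add up to the system total. A hedged next step:

    pidstat -wt 10 1              # -t adds a row per thread (TID), not just per process
    grep ctxt /proc/25185/status  # cumulative switch counters for a suspect PID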

    Read the article

  • Server load high, CPU idle. NFS the cause?

    - by Mech Software
    I am running into a scenario where I'm seeing a high server load (sometimes upwards of 20 or 30) and very low CPU usage (98% idle). I'm wondering if these wait states are coming from an NFS filesystem connection. Here is what I see in vmstat:

    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
     r b swpd    free buff cache si so  bi  bo in    cs us sy  id wa st
     2 1    0 1298784    0     0  0  0  16   5  0     9  1  1  97  2  0
     0 1    0 1308016    0     0  0  0   0   0  0  3882  4  3  80 13  0
     0 1    0 1307960    0     0  0  0 120   0  0  2960  0  0  88 12  0
     0 1    0 1295868    0     0  0  0   4   0  0  4235  1  2  84 13  0
     6 0    0 1292740    0     0  0  0   0   0  0  5003  1  1  98  0  0
     4 0    0 1300860    0     0  0  0   0 120  0 11194  4  3  93  0  0
     4 1    0 1304576    0     0  0  0 240   0  0 11259  4  3  88  6  0
     3 1    0 1298952    0     0  0  0   0   0  0  9268  7  5  70 19  0
     3 1    0 1303740    0     0  0  0  88   8  0  8088  4  3  81 13  0
     5 0    0 1304052    0     0  0  0   0   0  0  6348  4  4  93  0  0
     0 0    0 1307952    0     0  0  0   0   0  0  7366  5  4  91  0  0
     0 0    0 1307744    0     0  0  0   0   0  0  3201  0  0 100  0  0
     4 0    0 1294644    0     0  0  0   0   0  0  5514  1  2  97  0  0
     3 0    0 1301272    0     0  0  0   0   0  0 11508  4  3  93  0  0
     3 0    0 1307788    0     0  0  0   0   0  0 11822  5  3  92  0  0

    From what I can tell, when the IO goes up, the waits go up. Could NFS be the cause here, or should I be worried about something else? This is a VPS box on a fibre channel SAN, so I'd think the bottleneck wouldn't be the SAN. Comments?
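
    Since load counts D-state (uninterruptible) tasks, a hedged way to confirm the NFS theory is to catch what the blocked tasks are waiting on and look at client-side NFS counters:

    ps -eo state,pid,wchan:30,cmd | awk '$1=="D"'  # blocked tasks and their wait channel
    nfsstat -c                                     # client op counters; retransmits are a red flag
    nfsiostat 5                                    # per-mount latency, if nfs-utils provides it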

    Read the article

  • CentOS TCP/IP connection very slow

    - by yuli chika
    I have a VPS (CentOS 6.1, 64-bit) with 4GB RAM. It has always run well, but in the last few days the server has become slow: opening a small CSS file (2kb) takes 22 seconds. Tested from home/office/phone with IE, Chrome, Safari and Firefox. Seen in Firebug's networking panel:

    DNS Lookup    4 ms
    Connecting    21.18 s
    Sending       1 ms
    Waiting       115 ms
    Receiving     9 ms

    The connection takes 21.18 seconds. I have checked all the log files; there are no errors. The top command shows there is still free memory:

    top - 00:23:15 up 8 days, 3:57, 1 user, load average: 3.60, 3.42, 3.83
    Tasks: 221 total, 4 running, 217 sleeping, 0 stopped, 0 zombie
    Cpu(s): 19.3%us, 3.2%sy, 0.0%ni, 76.1%id, 1.4%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4194304k total, 3247724k used, 946580k free, 0k buffers
    Swap: 0k total, 0k used, 0k free, 0k cached

      PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
    32357 mysql  15  0 3710m 835m 6268 S 34.5 20.4 39:14.40 mysqld
     9780 apache 15  0  442m  59m  12m S 33.2  1.4  0:05.69 httpd
     9842 apache 15  0  403m  26m  10m S 16.9  0.7  0:01.23 httpd
     9847 apache 15  0  412m  45m  22m R 15.3  1.1  0:01.00 httpd
     9834 apache 15  0  426m  46m  11m R 13.0  1.1  0:02.22 httpd
     9891 apache 15  0  407m  43m  19m S  8.0  1.1  0:00.33 httpd
     9845 apache 15  0  414m  51m  24m S  6.0  1.3  0:01.53 httpd
     9827 apache 15  0  402m  28m  11m S  3.3  0.7  0:02.69 httpd
     9768 apache 16  0  414m  51m  24m S  3.0  1.3  0:06.51 httpd
     9889 root   15  0  211m  12m 8160 S  2.7  0.3  0:00.32 php
     9702 apache 15  0  415m  55m  26m S  1.7  1.4  0:10.67 httpd
     9844 apache 15  0  413m  47m  21m S  1.7  1.2  0:01.21 httpd
     9697 apache 15  0  414m  51m  24m S  1.3  1.3  0:11.05 httpd
     9778 apache 15  0  414m  53m  25m S  1.3  1.3  0:05.38 httpd
     9772 apache 15  0  414m  51m  23m R  0.7  1.3  0:05.04 httpd
     9823 apache 15  0  415m  50m  23m S  0.7  1.2  0:03.97 httpd
     9837 apache 15  0  402m  27m  11m S  0.3  0.7  0:01.04 httpd

    So, how do I find where the problem is and fix it? I haven't changed any config files in recent days. Thanks.
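
    A ~21-second "Connecting" phase with near-instant everything else is the classic signature of dropped SYN packets being retransmitted (3s + 6s + 12s backoff), often caused by a full listen backlog. A hedged sketch for checking:

    netstat -s | egrep -i 'listen|overflow'  # "times the listen queue of a socket overflowed"
    ss -ltn                                  # per-listener Recv-Q vs. configured backlog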

    Read the article

  • Opening an existing process

    - by Grasper
    I am using Eclipse on Linux through a remote connection (xrdp). My internet connection dropped, so I was disconnected from the server while Eclipse was running. Now that I have logged in again, the "top" command shows that Eclipse is still running under my user name. Is there some way I can bring that process back into my view? (I do not want to kill it, because I am in the middle of checking in a large swath of code.) It doesn't show up in the bottom panel after I log in again. Here is the "top" output:

    /home/mclouti% top
    top - 08:32:31 up 43 days, 13:06, 29 users, load average: 0.56, 0.79, 0.82
    Tasks: 447 total, 1 running, 446 sleeping, 0 stopped, 0 zombie
    Cpu(s): 6.0%us, 0.7%sy, 0.0%ni, 92.1%id, 1.1%wa, 0.1%hi, 0.1%si, 0.0%st
    Mem: 3107364k total, 2975852k used, 131512k free, 35756k buffers
    Swap: 2031608k total, 59860k used, 1971748k free, 817816k cached

      PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
    13415 mclouti  15  0  964m 333m  31m S 21.2 11.0  83:12.96 eclipse
    16040 mclouti  15  0  2608 1348  888 R  0.7  0.0   0:00.12 top
    31395 mclouti  15  0 29072  20m 8524 S  0.7  0.7 611:08.08 Xvnc
     2583 root     20  0  898m 2652 1056 S  0.3  0.1 139:26.82 automount
    28990 postgres 15  0 13564  868  304 S  0.3  0.0  26:33.36 postgres
    28995 postgres 16  0 13808 1248  300 S  0.3  0.0   6:54.95 postgres
    31440 mclouti  15  0  3072 1592 1036 S  0.3  0.1   6:01.54 gam_server
        1 root     15  0  2072  524  496 S  0.0  0.0   0:03.00 init
        2 root     RT -5     0    0    0 S  0.0  0.0   0:04.53 migration/0
        3 root     34 19     0    0    0 S  0.0  0.0   0:00.04 ksoftirqd/0
        4 root     RT -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/0
        5 root     RT -5     0    0    0 S  0.0  0.0   0:01.72 migration/1
        6 root     34 19     0    0    0 S  0.0  0.0   0:00.07 ksoftirqd/1
        7 root     RT -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/1
        8 root     RT -5     0    0    0 S  0.0  0.0   0:04.33 migration/2
        9 root     34 19     0    0    0 S  0.0  0.0   0:00.05 ksoftirqd/2
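
    The listing shows the desktop is still being served by a long-lived Xvnc process (PID 31395), so the session may be recoverable by reattaching to that same VNC display rather than starting a fresh one. A hedged sketch (the display number is an assumption; the exact command depends on which VNC server xrdp is using):

    vncserver -list            # list this user's surviving VNC displays (TigerVNC)
    vncviewer myserver:1       # reattach; the Eclipse window should still be there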

    Read the article

  • High apache load but zero traffic

    - by Adie
    I have a problem with a new server. It's a CentOS VPS with 1GB of RAM, running the WordPress CMS. Traffic is under 100 visitors/hour, but Apache builds up a high load and hangs the server with zero free RAM, at which point I can't connect through SSH and have to reboot the VPS to get it working again. Here is what the load on Apache looks like:

    Tasks: 66 total, 1 running, 65 sleeping, 0 stopped, 0 zombie
    Cpu(s): 1.6%us, 12.3%sy, 0.0%ni, 48.1%id, 23.0%wa, 4.8%hi, 10.2%si, 0.0%st
    Mem: 1018776k total, 116620k used, 902156k free, 1236k buffers
    Swap: 1048568k total, 1013052k used, 35516k free, 26628k cached

    2949 apache 20 0 459m 42m 3732 D 3.0 4.2 0:09.23 httpd
    2959 apache 20 0 460m 29m 3744 D 2.0 3.0 0:02.72 httpd
    2968 apache 20 0 460m 26m 3808 D 2.0 2.6 0:02.27 httpd
    2972 apache 20 0 460m 24m 3784 D 2.0 2.5 0:02.44 httpd
    2986 apache 20 0 460m 29m 3784 R 2.0 2.9 0:02.40 httpd
    2969 apache 20 0 458m 29m 3864 D 1.6 3.0 0:02.63 httpd
    2974 apache 20 0 460m 25m 3820 D 1.6 2.6 0:02.43 httpd
    2990 apache 20 0 460m 23m 3920 D 1.6 2.4 0:02.36 httpd
    2994 apache 20 0 460m 31m 3756 D 1.6 3.2 0:02.62 httpd
    2956 apache 20 0 460m 26m 3740 D 1.3 2.7 0:02.73 httpd
    2957 apache 20 0 465m 22m 3644 D 1.3 2.3 0:02.80 httpd
    2967 apache 20 0 458m 24m 3764 D 1.3 2.5 0:02.60 httpd
    2970 apache 20 0 463m 25m 3764 D 1.3 2.6 0:03.07 httpd
    2971 apache 20 0 451m 22m 3792 D 1.3 2.3 0:02.47 httpd
    2973 apache 20 0 458m 25m 3768 D 1.3 2.6 0:02.52 httpd
    2987 apache 20 0 465m 20m 3772 D 1.3 2.1 0:03.02 httpd

    But sometimes the server stays up for more than 5-10 hours before the problems start.
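
    With roughly 20-40MB resident per child and only 1GB of RAM, a modest surge of slow requests will exhaust memory and push the box into swap. A hedged starting point (prefork MPM assumed; the numbers are rough, not a prescription):

    <IfModule prefork.c>
        StartServers         2
        MinSpareServers      2
        MaxSpareServers      5
        MaxClients          12      # roughly (RAM available to Apache) / (RES per child)
        MaxRequestsPerChild 500
    </IfModule>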

    Read the article

  • KVM and JBoss Java Application Server

    - by Jason
    We have a large Xen deployment running on both RHEL and CentOS, and we have recently started looking at KVM, since that looks like where the future of VMs on Linux is. We can load the server and get everything running without an issue. However, when we load up a new one with JBoss (4.2 Community edition, Sun JDK 6) and deploy a large EAR, the server goes a little crazy: %sy jumps to 80-99% and just hangs there for long periods of time, and we see a similar jump in %us on the host machine. We thought at first this might be I/O, since it seems to happen at JBoss startup but then "cools down" after everything is loaded. We did some tests by extracting large tar.gz files and running jar -xvf on the EAR, but could not re-create it. Then we started thinking this might be some type of memory-access issue. We loaded a C program that generates a lot of memory activity, and sure enough we saw the spikes again. Not as high, mind you, but we did see them. We then wrote a small Java program to do the same thing, and again we saw the spikes. Any thoughts on what might be causing this? Is this just the way KVM works? As a side note, we do NOT see this behavior on any other setup: Xen, VMware and/or native iron. The system does seem a bit slower than our Xen / VMware ones.
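
    A hedged way to see where that system time goes is to count VM exits from the host; memory-heavy workloads often show up as storms of page-fault or TLB-related exits:

    kvm_stat                            # live exit-reason counters (ships with qemu-kvm)
    perf stat -e 'kvm:*' -a sleep 10    # same data via kvm tracepoints, run on the host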

    Read the article

  • 0% CPU in top for all processes, but load average > 1

    - by chrisdew
    On two different servers (with Ubuntu 12.04 LTS AMD64) I have seen the following behaviour:

    top - 10:50:05 up 305 days, 21:17, 1 user, load average: 1.94, 2.52, 2.97
    Tasks: 141 total, 2 running, 139 sleeping, 0 stopped, 0 zombie
    Cpu(s): 41.5%us, 6.5%sy, 0.0%ni, 51.8%id, 0.0%wa, 0.2%hi, 0.1%si, 0.0%st
    Mem: 8178432k total, 5753740k used, 2424692k free, 159480k buffers
    Swap: 15625208k total, 0k used, 15625208k free, 4905292k cached

      PID USER PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        1 root 20  0 23928 2072 1216 S    0  0.0  0:56.42 init
        2 root 20  0     0    0    0 S    0  0.0  0:00.01 kthreadd
        3 root RT  0     0    0    0 S    0  0.0  0:01.23 migration/0
        4 root 20  0     0    0    0 S    0  0.0  2:39.82 ksoftirqd/0
        5 root RT  0     0    0    0 S    0  0.0  0:00.00 watchdog/0
        6 root RT  0     0    0    0 S    0  0.0  0:02.99 migration/1
        7 root 20  0     0    0    0 S    0  0.0  2:32.15 ksoftirqd/1
        8 root RT  0     0    0    0 S    0  0.0  0:00.00 watchdog/1
        9 root RT  0     0    0    0 S    0  0.0  0:11.67 migration/2
       10 root 20  0     0    0    0 S    0  0.0 29:00.34 ksoftirqd/2

    The server is working fine, but top shows all processes as using 0% CPU. A reboot fixed this on an earlier machine, but I haven't yet tried it on this one. I have run top several times, so I am sure I haven't accidentally pressed '<' or '>' to sort by a different column. Sorting the process list by each of the available columns still shows 0% CPU for all displayed processes. What is going on? Is this a kernel bug?

    Update: if I use top -p <PID> for a known, busy process, top still displays 0% CPU for that process.
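
    A hedged cross-check is to measure CPU usage with tools that do not share top's per-process delta accounting; if these show sane numbers, the problem is in top rather than in the kernel counters:

    pidstat 1 5                               # per-process %CPU over five 1 s samples
    ps -eo pid,pcpu,comm --sort=-pcpu | head  # cumulative %CPU since process start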

    Read the article

  • Apache process consumes too much CPU

    - by Niro
    I have an Ubuntu Apache/PHP server doing approx. 100 hits/sec, with a PHP cron running in the background. I occasionally get a high CPU load on one of the Apache processes, and it stays high regardless of traffic or cron activity. It seems to me that it's stuck in some kind of loop. Below you will find the top and strace info. How can I find where the bad code is and what causes this?

    top - 14:45:24 up 3 days, 3:38, 1 user, load average: 5.10, 5.88, 5.85
    Tasks: 163 total, 5 running, 158 sleeping, 0 stopped, 0 zombie
    Cpu(s): 47.8%us, 18.5%sy, 0.0%ni, 10.2%id, 0.0%wa, 0.0%hi, 1.8%si, 21.6%st
    Mem: 7885012k total, 3858484k used, 4026528k free, 177444k buffers
    Swap: 0k total, 0k used, 0k free, 1037868k cached

      PID USER     PR NI VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
    10736 www-data 20  0 769m 559m 478m R   69  7.3  29:08.30 apache2
    10844 www-data 20  0 824m 601m 492m S   17  7.8   4:37.90 apache2
     1016 root     20  0 242m  25m 4628 S    6  0.3 162:07.93 scalarizr
     9030 www-data 20  0 879m 619m 492m S    4  8.0   5:06.82 apache2
    20216 www-data 20  0 747m 228m 170m S    4  3.0   0:01.94 apache2
    10807 www-data 20  0 814m 584m 492m S    3  7.6   4:54.10 apache2
    10455 www-data 20  0 831m 574m 492m S    3  7.5   4:32.65 apache2
    10495 www-data 20  0 849m 592m 492m S    3  7.7   4:41.10 apache2
    10884 www-data 20  0 840m 581m 492m S    3  7.6   4:25.06 apache2

    ^CProcess 10736 detached
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     74.55    0.148052           1    109755           gettimeofday
     25.36    0.050370           0    164634           clock_gettime
      0.09    0.000178           0     54878           poll
    ------ ----------- ----------- --------- --------- ----------------
    100.00    0.198600                329267           total
    root@ec2-67-202-54-36:~# ^C
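
    The gettimeofday/clock_gettime/poll pattern is what an event loop spinning on a zero timeout looks like. A hedged way to map the spinning PID back to a request and code path (mod_status assumed available):

    apachectl fullstatus | grep 10736   # scoreboard row: client, vhost, request URL
    # if PHP runs via mod_php, a one-shot backtrace of the child can name the code path:
    gdb -p 10736 -batch -ex 'bt'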

    Read the article

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved the website to a VPS with 4GB RAM. But for some reason the website has become very slow. This is the vmstat output:

    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
     r b swpd    free buff cache si so bi bo in cs us sy  id wa st
     1 0    0 3050500    0     0  0  0  0  1  0  0  0  0 100  0  0

    Here's the Apache Benchmark output for a STATIC html page, run on the server itself:

    Benchmarking www.ask-oracle.com (be patient)...
    apr_poll: The timeout specified has expired (70007)
    Total of 20 requests completed

    Update - server config: CentOS 5.6, 4 CPU cores, 4GB RAM, LAMP stack with APC, WordPress, only one website. Pages take almost twice as long to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize Apache, MySQL, etc.

    Update 2: CPU usage is low; see the uptime output:

    11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09

    Update 3: When I load any web page, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can only accept a limited number of connections and holds extra connections in a waiting state. How do I check this?

    Update 4: Following is the output from running netperf:

    TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    10^6bits/sec
    87380  16384   16384    10.00    9615.40
    [root@ip-118-139-177-244 j3ngn5ri6r01t3]#

    Here are the Apache MPM settings from httpd.conf; do they look okay?

    <IfModule worker.c>
        StartServers         5
        MaxClients         100
        MinSpareThreads     50
        MaxSpareThreads    250
        ThreadsPerChild    125
        MaxRequestsPerChild 10000
        ServerLimit        100
    </IfModule>
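
    One inconsistency stands out in that block: with the worker MPM, MaxClients should be a multiple of ThreadsPerChild, and 100 clients with 125 threads per child means Apache cannot even fill a single child as configured. A hedged, self-consistent alternative (the numbers are starting points, not a prescription):

    <IfModule worker.c>
        StartServers         2
        ServerLimit          4
        ThreadsPerChild     25
        MaxClients         100     # 4 children x 25 threads each
        MinSpareThreads     25
        MaxSpareThreads     75
        MaxRequestsPerChild 10000
    </IfModule>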

    Read the article

  • High CPU usage in my digitalocean droplet

    - by Ibrahim Azhar Armar
    I am experiencing high CPU usage. Here are the stats I got from the server: within 15 minutes of every restart, consumption goes up to 100%. What could be going wrong? I have a WordPress copy installed on the server, and it does not get much traffic. Here are the stats I got using the top command on the server:

    top - 11:46:02 up 12 min, 3 users, load average: 40.89, 16.03, 6.11
    Tasks: 132 total, 41 running, 91 sleeping, 0 stopped, 0 zombie
    Cpu(s): 24.3%us, 61.5%sy, 0.0%ni, 0.0%id, 4.0%wa, 0.0%hi, 0.0%si, 10.2%st
    Mem: 2050896k total, 1988656k used, 62240k free, 284k buffers
    Swap: 0k total, 0k used, 0k free, 4712k cached

     PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
      31 root     20  0     0    0    0 R   39  0.0 1:35.53 kswapd0
     899 root     20  0 15988  172    0 S   14  0.0 0:05.00 irqbalance
     418 syslog   20  0  243m  600    0 S   13  0.0 0:06.85 rsyslogd
     944 mysql    20  0 1320m  53m    0 S   12  2.7 0:21.15 mysqld
    2357 root     20  0 17344  532  164 R   11  0.0 0:14.27 top
     960 root     20  0  246m 3816    0 S    3  0.2 0:08.18 php5-fpm
    2431 www-data 20  0  344m  64m  908 R    2  3.2 0:04.23 apache2
    2435 www-data 20  0  304m  63m  836 R    2  3.2 0:03.43 apache2
    2413 www-data 20  0  349m  63m  920 R    2  3.2 0:07.51 apache2
    2465 www-data 20  0  349m  64m  944 R    2  3.2 0:05.04 apache2
    2518 www-data 20  0  307m  41m 1204 R    2  2.1 0:01.37 apache2
    2406 www-data 20  0  346m  56m 1144 R    2  2.8 0:03.76 apache2
    2456 www-data 20  0  345m  55m 1184 R    2  2.8 0:02.67 apache2
    2373 www-data 20  0  351m  63m  784 R    2  3.2 0:11.09 apache2
    2439 www-data 20  0  306m  35m  916 R    2  1.8 0:02.51 apache2
    2450 www-data 20  0  345m  55m 1088 R    2  2.8 0:02.96 apache2
    2486 www-data 20  0  299m  10m  876 R    2  0.5 0:01.19 apache2
    2523 www-data 20  0  300m  27m  796 R    2  1.4 0:00.99 apache2
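
    kswapd0 burning 39% CPU with 62MB free and no swap configured says the droplet is out of memory, so the kernel is constantly scanning for pages to reclaim. A hedged stopgap is to add a swap file while trimming the Apache and MySQL footprints (the size is an assumption):

    dd if=/dev/zero of=/swapfile bs=1M count=1024
    chmod 600 /swapfile
    mkswap /swapfile && swapon /swapfile
    echo '/swapfile none swap sw 0 0' >> /etc/fstab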

    Read the article

  • MySQL is killing the server I/O.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the GigeNET cloud). The database is ~10 GB (~9 million posts, ~60 queries per second), and lately MySQL has been grinding the disk like there's no tomorrow according to iotop, slowing the site down. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs: Debian Lenny 64bit, ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk.
    Kernel: 2.6.32-4-amd64
    mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))

    Other software:
    vBulletin 3.8.4
    memcached 1.2.2
    PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
    lighttpd/1.4.28 (ssl) - a light and fast webserver
    PHP and vBulletin are configured to use memcached.

    MySQL settings:

    [mysqld]
    key_buffer = 128M
    max_allowed_packet = 16M
    thread_cache_size = 8
    myisam-recover = BACKUP
    max_connections = 1024
    query_cache_limit = 2M
    query_cache_size = 128M
    expire_logs_days = 10
    max_binlog_size = 100M
    key_buffer_size = 128M
    join_buffer_size = 8M
    tmp_table_size = 16M
    max_heap_table_size = 16M
    table_cache = 96

    Other:

    > vmstat
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
     r b  swpd  free buff   cache si so bi bo in cs us sy id wa
     9 0 73140 36336 8968 1859160  0  0 42 15  3  2  6  1 89  5

    > /etc/init.d/mysql status
    Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573  Flush tables: 1  Open tables: 337  Queries per second avg: 61.302.

    Edit: additional info.
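
    Before reaching for replication, it is worth finding which queries actually hit the disk; a ~10 GB dataset with only a 128M key buffer will thrash MyISAM indexes. A hedged my.cnf sketch (values are starting points; the InnoDB line only matters if tables are converted):

    [mysqld]
    slow_query_log          = 1
    slow_query_log_file     = /var/log/mysql/slow.log
    long_query_time         = 1
    key_buffer_size         = 512M     # MyISAM index cache, if staying on MyISAM
    innodb_buffer_pool_size = 4G       # after converting hot tables to InnoDB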

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512MB) from Linode, and I was running nginx + php5-fpm (which comes with PHP 5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company), and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

    pm = dynamic
    pm.max_children = 5
    pm.start_servers = 2
    pm.min_spare_servers = 2
    pm.max_spare_servers = 5
    pm.max_requests = 300

    That's what the top command tells me:

    top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
    Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 532288k total, 469628k used, 62660k free, 28760k buffers
    Swap: 1048568k total, 408k used, 1048160k free, 210060k cached

      PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
    22806 www-data 20  0  178m  67m  31m S    1 13.1 0:05.02 php5-fpm
     8980 mysql    20  0  241m  55m 7384 S    0 10.6 2:42.42 mysqld
    22807 www-data 20  0  162m  43m  22m S    0  8.3 0:04.84 php5-fpm
    22808 www-data 20  0  160m  41m  23m S    0  8.0 0:04.68 php5-fpm
    25102 www-data 20  0  151m  30m  21m S    0  5.9 0:00.80 php5-fpm
    10849 root     20  0 44100 8352 1808 S    0  1.6 0:03.16 munin-node
    22805 root     20  0  145m 4712 1472 S    0  0.9 0:00.16 php5-fpm
    21859 root     20  0 66168 3248 2540 S    1  0.6 0:00.02 sshd
    21863 root     20  0 66028 3188 2548 S    0  0.6 0:00.06 sshd
     3956 www-data 20  0 31756 3052  928 S    0  0.6 0:06.42 nginx
     3954 www-data 20  0 31712 3036  928 S    0  0.6 0:06.74 nginx
     3951 www-data 20  0 31712 3008  928 S    0  0.6 0:06.42 nginx
     3957 www-data 20  0 31688 2992  928 S    0  0.6 0:06.56 nginx
     3950 www-data 20  0 31676 2980  928 S    0  0.6 0:06.72 nginx
     3955 www-data 20  0 31552 2896  928 S    0  0.5 0:06.56 nginx
     3953 www-data 20  0 31552 2888  928 S    0  0.5 0:06.42 nginx
     3952 www-data 20  0 31544 2880  928 S    0  0.5 0:06.60 nginx

    So, the question is: is there any way to use less memory? By the way, I have 16 cores, and it would be nice to make use of them...
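
    Note that much of each php5-fpm RES figure is shared (the SHR column, 21-31m per worker), so the true private footprint is smaller than a naive sum of RES suggests. For the 16 cores, the nginx side is the easy part (a hedged nginx.conf sketch):

    worker_processes 16;    # one nginx worker per core ("auto" is available on nginx >= 1.3.8)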

    Read the article

  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS server, and I'm having a few issues. We're running WordPress on a 1GB CentOS server configured per online research. No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks completely. We have to do a hard reboot from the control panel, and then everything is fine. Watching "top", I see it hovering between 35-55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. "error_log" tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month, that seems like overkill, since it ran fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

    Edit - quick top snapshot:

    top - 15:18:15 up 2 days, 13:04, 1 user, load average: 0.56, 0.44, 0.38
    Tasks: 85 total, 2 running, 83 sleeping, 0 stopped, 0 zombie
    Cpu(s): 6.7%us, 3.5%sy, 0.0%ni, 89.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 2051088k total, 736708k used, 1314380k free, 199576k buffers
    Swap: 4194300k total, 0k used, 4194300k free, 287688k cached
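
    A hedged way to size MaxClients before buying more RAM: measure the average Apache child, then divide the memory you can spare for Apache by that figure.

    # average resident size per httpd child, in MB (classic one-liner; header row filtered out)
    ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum+=$8; n++} END {print sum/n/1024 " MB"}'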

    Read the article

  • "iostat" command different in two equal machines

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine:

    7 GB of memory
    20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
    1690 GB of instance storage
    64-bit platform
    I/O Performance: High
    API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st, and about 1.5GB of memory is free. Any idea what it could be, or where else we can check?

    iostat command on the proper machine:

    avg-cpu:  %user %nice %system %iowait %steal %idle
               8.97  0.03    4.46   0.19   0.14  86.23

    Device:  tps Blk_read/s Blk_wrtn/s Blk_read  Blk_wrtn
    xvdap1  1.60       0.69      55.38   587620  47254184
    xvdfp2  2.64       1.10      61.04   934786  52091056
    xvdfp4  0.86       0.19      41.72   163866  35601920
    xvdfp1  4.37      36.59      73.89 31220810  63051504
    xvdfp3  8.03       7.08      94.63  6045402  80749184

    iostat command on the problematic machine:

    avg-cpu:  %user %nice %system %iowait %steal %idle
               9.29  0.04    5.55   0.26   0.11  84.74

    Device:  tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    xvdap1  2.13       3.34      68.85   246244  5077888
    xvdfp1  7.60      74.31     104.88  5480362  7734840
    xvdfp3 13.22      73.67     125.00  5433386  9218600
    xvdfp4  1.11       0.76      65.08    55762  4799248
    xvdfp2  4.16       3.31      99.17   243818  7313264

    Many thanks for the help.
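
    Since plain iostat reports averages since boot, a hedged next step is to sample live behaviour and per-process contributions:

    iostat -x 5 3     # adds await/%util per device, sampled now rather than since boot
    pidstat -d 5 3    # per-process read/write rates (sysstat; kernel >= 2.6.20)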

    Read the article

  • High fan speed with no reason

    - by Klaus
    For a few weeks, the fans of my Lenovo B590 laptop, running Xubuntu 14, have been switching to high speed a few minutes after it is turned on. The fans won't slow down until I turn the computer off. This is quite strange, since this didn't happen before, and the temperatures are quite low (are they?):

    $ sensors
    Adapter: Virtual device
    temp1: +36.0°C (crit = +88.0°C)
    temp2: +30.0°C (crit = +126.0°C)

    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0: +37.0°C (high = +72.0°C, crit = +90.0°C)
    Core 0:        +34.0°C (high = +72.0°C, crit = +90.0°C)
    Core 1:        +31.0°C (high = +72.0°C, crit = +90.0°C)

    thinkpad-isa-0000
    Adapter: ISA adapter
    fan1: 0 RPM

    pkg-temp-0-virtual-0
    Adapter: Virtual device
    temp1: +37.0°C

    $ sudo hddtemp /dev/sda
    /dev/sda: ST500LT012-9WS142: 33°C

    The computer is under low load:

    top - 08:30:15 up 16 min, 2 users, load average: 0.28, 0.23, 0.23
    Tasks: 197 total, 1 running, 196 sleeping, 0 stopped, 0 zombie
    %Cpu(s): 0.8 us, 0.5 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
    KiB Mem: 3607944 total, 1973956 used, 1633988 free, 99660 buffers
    KiB Swap: 3744764 total, 0 used, 3744764 free. 789936 cached Mem

    The BIOS is up to date (and there are no fan settings in it), and the fan is clean and dust-free. Why would the BIOS run the fans at high speed when there seems to be no reason for it? It seems that the fan cannot be controlled manually on this model, so I guess the only solution is to understand why this happens.
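
    Note that the thinkpad sensor reports "fan1: 0 RPM" while the fan is audibly spinning, which suggests the embedded controller is not being read correctly. A hedged experiment, assuming the thinkpad_acpi interface works at all on a B590 (the question suggests it may not):

    sudo modprobe thinkpad_acpi fan_control=1       # may be refused on non-ThinkPad ECs
    cat /proc/acpi/ibm/fan                          # current status, level, and RPM
    echo level auto | sudo tee /proc/acpi/ibm/fan   # hand control back to the EC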

    Read the article

  • Massive number of context switches on ksoftirqd

    - by Pace
    We have two servers that are grinding to a halt. One is a VM and the other is bare metal. Neither of them is running similar code, but they are on the same network. It appears that an incredible number of context switches are arising from ksoftirqd (which is taking up a lot of CPU).

    vmstat output:

    procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
     r b swpd   free   buff   cache si so bi bo   in     cs us sy id wa st
     1 0    0 605092 182496 2637556  0  0  0  0 4177 519187  8 19 73  0  0
     2 0    0 605092 182496 2637556  0  0  0  0 4792 520980  8 19 74  0  0
     3 0    0 605092 182496 2637552  0  0  0  0 2137 659640 18 26 56  0  0
    ...

    pidstat output:

    TCK4-BM-06A:~ # pidstat -w -I 5
    Linux 2.6.32.12-0.7-default (TCK4-BM-06A) 07/02/2012 _x86_64_

    03:03:01 PM  PID   cswch/s nvcswch/s Command
    03:03:06 PM    1      0.20      0.00 init
    03:03:06 PM    4 386666.27      0.00 ksoftirqd/0
    03:03:06 PM    6      0.60      0.00 ksoftirqd/1
    03:03:06 PM    8 378213.17      0.00 ksoftirqd/2
    03:03:06 PM   10      0.20      0.00 ksoftirqd/3
    03:03:06 PM   12      0.20      0.00 ksoftirqd/4
    03:03:06 PM   26 377115.37      0.00 ksoftirqd/11
    03:03:06 PM   27      1.80      0.00 events/0
    03:03:06 PM   28      1.00      0.00 events/1
    03:03:06 PM   29      1.00      0.00 events/2
    03:03:06 PM   30      1.00      0.00 events/3
    03:03:06 PM   31      0.80      0.00 events/4
    03:03:06 PM   32      0.80      0.00 events/5
    ...

    My initial thought is that, since both are on the same network, something is flooding the network. Is this consistent with the data?
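
    It is consistent, and the softirq counters can confirm which class is firing. A hedged sketch (the interface name is an assumption):

    watch -d cat /proc/softirqs   # NET_RX climbing fast on the ksoftirqd CPUs = packet flood
    cat /proc/interrupts          # which NIC/queue the interrupts map to
    tcpdump -ni eth0 -c 1000      # sample the traffic itself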

    Read the article

  • Ubuntu's load average never below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several Ubuntu 12.04 VMs running on an Ubuntu 12.04 KVM host. Those virtual machines that are totally idle, with no services except syslog and the other "small" standard stuff of a fresh installation, show a constant load of "0.00 0.01 0.05" in top/htop as the 1/5/15-minute averages. When "real" applications are running, the load averages behave perfectly normally, but they never fall below the values above. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way. (Notice how load15 behaves nicely once it rises above 0.05 for a short time in the right half of the pic.) Unfortunately I don't know which diagnostic outputs might be helpful for you, so here's some default stuff:

    # top
    top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05
    Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 1019464k total, 73452k used, 946012k free, 6140k buffers
    Swap: 0k total, 0k used, 0k free, 22504k cached

    # free -m
                 total  used  free shared buffers cached
    Mem:           995    72   923      0       6      21
    -/+ buffers/cache:    43   951
    Swap:            0     0     0

    # iostat -x /dev/vda
    Linux 3.2.0-32-virtual (vm3) 11/15/2012 _x86_64_ (2 CPU)

    avg-cpu:  %user %nice %system %iowait %steal %idle
               0.25  0.00    0.65   0.20   0.24 98.66

    Device: rrqm/s wrqm/s  r/s  w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    vda       0.14   0.12 0.51 0.22  6.74  1.46    22.50     0.02 23.26   20.64   29.30  7.63  0.56

    Need anything else? Has anyone ever seen this behavior? Might this be a bug in kvm/ubuntu/kernel 3.x in the end? Thanks a lot!
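
    A hedged way to separate "graph artifact" from "hidden work" is to read the kernel's own counters directly; if the runqueue is empty while /proc/loadavg still reports 0.05, the floor comes from the kernel's fixed-point decay arithmetic rather than from real tasks:

    cat /proc/loadavg   # 4th field is runnable/total tasks, e.g. 1/62
    vmstat 1 10         # the "r" column samples the runqueue length directly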

    Read the article

  • Linux 2.6.24-gentoo-r3-comtrance on x86_64: high usage for unknown reasons

    - by Dorjan
    Hello everyone. I'm a complete rookie when it comes to all things Linux-related, so please treat me as such and assume I know nothing. That being said, my top says this:

    top - 12:08:03 up 11 days, 15:36, 0 users, load average: 5.47, 5.53, 5.46
    Tasks: 296 total, 2 running, 294 sleeping, 0 stopped, 0 zombie
    Cpu(s): 6.3%us, 1.4%sy, 0.0%ni, 71.3%id, 20.6%wa, 0.0%hi, 0.3%si, 0.0%st
    Mem: 8176880k total, 8118236k used, 58644k free, 89312k buffers
    Swap: 1004052k total, 0k used, 1004052k free, 7235652k cached

      PID USER    PR NI VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
     1229 root    15 -5    0    0    0 D    1  0.0 199:28.63 kjournald
     2946 root    20  0 1716  676  552 D    1  0.0 145:02.94 syslogd
    14553 root    20  0 2644 1268  876 R    1  0.0   0:00.34 top
    14609 postfix 20  0 7896 1884 1460 D    1  0.0   0:00.02 bounce
    14630 postfix 20  0 7896 1876 1452 R    0  0.0   0:00.00 bounce

    And my hard drives say:

    > df -k
    Filesystem  1K-blocks      Used Available Use% Mounted on
    /dev/sda3     4925556   4474836    200508  96% /
    /dev/sda5      489992     36090    428602   8% /tmp
    /dev/sda6   377951852 236171160 122581816  66% /var
    none          4088440         0   4088440   0% /dev/shm

    It has been like this for a few days now. I don't know what is causing the high server load (normally around 1.3); can anyone give any tips on how to track down the culprit? Many thanks,
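
    The 20.6%wa with kjournald and syslogd stuck in D state points at disk I/O, and the root filesystem at 96% full will make journaling and allocation slower. A hedged sketch for finding the writers:

    iotop -o -b -n 3   # only processes actually doing I/O (iotop must be installed)
    df -i /            # inode exhaustion can hurt before blocks run out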

    Read the article
