Search Results

Search found 329 results on 14 pages for 'ni'.

Page 8 of 14

  • Need help troubleshooting high CPU usage by PHP-FPM

    - by user432506
    There is a problem that is driving me crazy. After the day I tried to fix the CPU usage problem of my VPS, the CPU load has grown from 60% to 150%, and I have no idea what is causing it. Please help me. I had installed a copy of MediaWiki on a Linode 1024. The wiki is running on Nginx + PHP-FPM + MySQL. The wiki doesn't have much traffic, only around 4000 requests/day, mostly from Google and Bing bots. It had been using around 60% of the CPU (of the total 400% on the Linode) before. I thought that was a bit high, so two days ago I tried to fix the problem (not knowing what was waiting for me). I did nothing but add a new empty line to the wiki's configuration file, which changed its modified time and invalidated all the cached page files. I had done that before, and it would cause high CPU usage, but normally things got back to normal within hours. Not this time: my CPU usage has been around 150% for more than two days. It is php-fpm using most of the CPU resources; using 100% of three cores is not rare, and I hadn't seen that before. There are other sites on the Linode, but it should be the wiki, because if I take the wiki offline, CPU usage soon drops back to around 40%. That day I also duplicated php-fpm.conf and opened it, but didn't change it. I have no idea what I did wrong, and I'm asking for help here to save myself from going crazy! It is php-fpm; is there a way to find out what it is doing? I mean, which scripts are involved and what code is running?

    top:

      top - 06:34:33 up 10 days, 4:23, 2 users, load average: 1.10, 1.24, 1.37
      Tasks: 76 total, 4 running, 72 sleeping, 0 stopped, 0 zombie
      Cpu(s): 61.1%us, 3.1%sy, 0.0%ni, 32.8%id, 2.9%wa, 0.0%hi, 0.0%si, 0.1%st
      Mem: 1028684k total, 945192k used, 83492k free, 89580k buffers
      Swap: 524284k total, 18084k used, 506200k free, 530380k cached

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      26721 www-data 20 0 208m 54m 34m R 99 5.4 0:09.07 /opt/php5/sbin/php-fpm --fpm-config /opt/php5/etc/php-fpm.conf
      26592 www-data 20 0 207m 45m 26m R 91 4.5 0:12.77 /opt/php5/sbin/php-fpm --fpm-config /opt/php5/etc/php-fpm.conf
      26706 www-data 20 0 196m 43m 34m S 47 4.3 0:15.19 /opt/php5/sbin/php-fpm --fpm-config /opt/php5/etc/php-fpm.conf
      26583 www-data 20 0 197m 45m 35m S 33 4.5 0:19.08 /opt/php5/sbin/php-fpm --fpm-config /opt/php5/etc/php-fpm.conf
      26787 www-data 20 0 206m 36m 18m R 25 3.7 0:00.41 /opt/php5/sbin/php-fpm --fpm-config /opt/php5/etc/php-fpm.conf
      26661 www-data 20 0 207m 46m 26m S 13 4.6 0:19.87 /opt/php5/sbin/php-fpm --fpm-config /opt/php5/etc/php-fpm.conf
      1971 mysql 20 0 155m 57m 3952 S 8 5.7 383:57.81 /usr/sbin/mysqld
      242 root 20 0 0 0 0 S 1 0.0 0:51.36 [kworker/3:1]
      5711 root 20 0 139m 95m 580 S 1 9.5 0:41.30 /usr/local/bin/memcached -d -u root -m 128 -p 11211
      19463 root 20 0 190m 3984 1284 S 1 0.4 0:02.66 /opt/php5/sbin/php-fpm --fpm-config /opt/php5/etc/php-fpm.conf
      29100 www-data 20 0 10928 5540 820 S 1 0.5 4:49.05 nginx: worker process

    vmstat 30:

      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r b swpd free buff cache si so bi bo in cs us sy id wa
       0 0 16912 81456 90784 554172 0 0 4 6 3 2 11 1 87 1
       0 0 16912 78036 91000 555356 0 0 38 34 1397 375 12 1 87 0
       4 0 16912 31776 91528 557508 0 0 78 42 3197 487 45 1 52 1
       1 0 16912 83356 91768 558576 0 0 35 56 2608 449 32 1 67 1
       1 0 16912 81548 92040 559720 0 0 41 31 1243 432 8 1 91 1
       2 0 16912 53056 92332 562744 0 0 105 33 2013 581 17 1 81 1
       2 0 16912 73236 92552 564844 0 0 68 36 1968 615 16 1 82 1
       0 0 16912 91612 92904 566676 0 0 69 35 1845 692 13 1 85 1
       1 0 16912 71248 93180 568428 0 0 58 33 1952 604 15 1 82 1
       1 0 16868 55952 93516 572660 1 0 144 42 1801 637 12 1 86 1
       2 0 16868 48324 94416 577844 0 0 189 66 2058 702 17 1 80 2
       1 0 16928 58644 94592 578184 0 2 160 49 2578 723 25 1 70 3
       5 0 16928 22600 94980 580568 0 0 89 32 1496 361 13 0 85 1
       0 0 16988 49256 94500 576396 0 2 41 37 1601 426 14 1 85 0
       5 0 18084 24336 86032 502748 0 37 83 68 2989 562 42 1 56 0
       1 0 18084 123604 86376 506996 0 0 118 41 2201 573 22 1 76 1
       2 0 18084 126984 86752 508876 0 0 64 53 1620 490 13 1 85 1
       2 0 18084 103104 87148 510768 0 0 71 37 2757 602 33 1 64 1
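
    A reasonable first step, sketched here rather than taken from the post, is to let PHP-FPM itself say which scripts are slow: the pool config referenced in the top output supports a slow log that dumps a PHP backtrace for any request running longer than a timeout, and strace on a hot worker shows what it is doing at the syscall level. The paths below are assumptions based on the --fpm-config argument shown above:

      # in /opt/php5/etc/php-fpm.conf (pool section); the log path is an assumption
      request_slowlog_timeout = 5s
      slowlog = /var/log/php-fpm.slow.log

      # attach to one of the busy workers (26721 is a PID from the top output)
      strace -p 26721 -f -s 200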

    Read the article

  • How is htop "Swp" calculated?

    - by Thomas
    When I run htop (on OS X 10.6.8), I see something like this:

      1 [|||||||                  20.0%]   Tasks: 70 total, 0 running
      2 [|||                       7.2%]   Load average: 1.11 0.79 0.64
      3 [||||||||||||||||||||||||| 81.3%]  Uptime: 00:30:42
      4 [||                        5.8%]
      Mem[||||||||||||||||||||| 3872/4096MB]
      Swp[                          0/0MB]

      PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
      284 501 57 0 15.3G 1064M 0 S 0.0 6.5 0:01.26 /Applications/Firefox.app/Contents/MacOS/firefox -psn_0_90134
      437 501 57 0 14.8G 785M 0 S 0.0 4.8 0:00.18 /Applications/Thunderbird.app/Contents/MacOS/thunderbird -psn_0_114716
      428 501 63 0 12.8G 351M 0 S 1.0 2.1 0:00.51 /Applications/Firefox.app/Contents/MacOS/plugin-container.app/Contents/MacOS/
      696 501 63 0 11.7G 175M 0 S 0.0 1.1 0:00.02 /System/Library/Frameworks/QuickLook.framework/Resources/quicklookd.app/Conte
      38 0 33 0 11.1G 422M 0 S 0.0 2.6 0:00.59 /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framewo
      183 501 48 0 10.9G 137M 0 S 0.0 0.8 0:00.03 /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder

    How can processes be using gigabytes of VIRT memory while still 0 MB of swap is used?
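
    For what it's worth, VIRT counts reserved address space (shared libraries, memory-mapped files, allocator arenas), not pages actually backed by RAM or swap, which is why it can dwarf both. The swap figure htop shows can be cross-checked against the kernel's own counter with standard macOS commands:

      sysctl vm.swapusage
      # vm.swapusage: total = 0.00M  used = 0.00M  free = 0.00M

      vm_stat    # page-level breakdown: free, active, inactive, wired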

    Read the article

  • What does %st mean in top?

    - by Ben
    Here is an example from my top:

      Cpu(s): 6.0%us, 3.0%sy, 0.0%ni, 78.7%id, 0.0%wa, 0.0%hi, 0.3%si, 12.0%st

    I am trying to figure out the significance of the %st field. I read that it means "steal" CPU time and represents time spent by the hypervisor, but I want to know what that actually means to me. Does it mean I may be on a busy physical server, where someone else is using too much CPU and taking it from my VM? If I am using EBS, could it be related to handling EBS I/O at the hypervisor level? Is it related to things running on my VM, or is it completely unaffected by me?
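
    One generic way to tell whether steal is sustained contention or a brief burst is to sample it over time with the sysstat tools (not from the original post):

      mpstat 1 10    # per-second CPU breakdown, including a %steal column
      sar -u 60      # keep logging once a minute for later inspection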

    Read the article

  • xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

    - by mazgalici
    root@mazgalici:~# startx

      X.Org X Server 1.7.6
      Release Date: 2010-03-17
      X Protocol Version 11, Revision 0
      Build Operating System: Linux 2.6.24-28-server i686 Ubuntu
      Current Operating System: Linux mazgalici 2.6.18-194.26.1.el5.028stab079.2PAE #1 SMP Fri Dec 17 19:34:22 MSK 2010 i686
      Kernel command line: quiet
      Build Date: 10 November 2010 11:25:26AM
      xorg-server 2:1.7.6-2ubuntu7.4 (For technical support please see )
      Current version of pixman: 0.16.4
      Before reporting problems, check to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jan 11 01:28:48 2011
      (==) Using config directory: "/usr/lib/X11/xorg.conf.d"

      Fatal server error:
      xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

      Please consult the The X.Org Foundation support at http://wiki.x.org for help.
      Please also check the log file at "/var/log/Xorg.0.log" for additional information.

      ddxSigGiveUp: Closing log
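
    The "Current Operating System" line shows an OpenVZ kernel (028stab), and OpenVZ containers normally have no /dev/tty0 for a real X server to attach to. Two hedged options, assuming X is needed for rendering rather than for a physical console:

      # option 1: create the missing console device node inside the container
      mknod /dev/tty0 c 4 0

      # option 2: skip real hardware and use a virtual framebuffer instead
      apt-get install xvfb
      Xvfb :1 -screen 0 1024x768x16 &
      DISPLAY=:1 someprogram    # 'someprogram' is a placeholder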

    Read the article

  • What is using my swap space under Linux, and why?

    - by Fabian
    On my Linux system I get these stats from top:

      Tasks: 155 total, 1 running, 153 sleeping, 0 stopped, 1 zombie
      Cpu(s): 1.5%us, 0.3%sy, 0.0%ni, 97.4%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 8177180k total, 2025504k used, 6151676k free, 44176k buffers
      Swap: 7999996k total, 495300k used, 7504696k free, 637612k cached

    It shows that my system is using 495 MB of swap. Why is this so? 6 GB of RAM is free, and if I disabled swap entirely the system would still work. Can anyone explain what the number really shows, or what is swapping?
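
    Two generic checks, not from the original post: the kernel swaps out long-idle pages even when plenty of RAM is free, and how eagerly it does so is governed by vm.swappiness; on reasonably recent kernels, /proc/<pid>/status also exposes a VmSwap field, so the owners of the swapped pages can be listed directly:

      cat /proc/sys/vm/swappiness    # default is usually 60
      grep VmSwap /proc/[0-9]*/status | sort -k2 -n -r | head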

    Read the article

  • Postfix SMTP server down on Ubuntu

    - by Paddington
    I have a Plesk server running Postfix on Ubuntu 10.04, and the SMTP service on port 25 is down. When I stop and then start Postfix, the server comes up for only a minute and then goes down again. I have checked the load on the server and it is low, as shown:

      top - 04:29:33 up 19 days, 3:25, 4 users, load average: 1.47, 1.78, 2.34
      Tasks: 936 total, 1 running, 935 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 86.6%id, 11.7%wa, 0.6%hi, 0.1%si, 0.0%st
      Mem: 6110496k total, 6072988k used, 37508k free, 251244k buffers
      Swap: 12000544k total, 95264k used, 11905280k free, 4370432k cached

    IMAP clients are not experiencing a problem, and there are no issues with receiving email over either POP or IMAP. Only SMTP (port 25) is a problem. If I ask clients to use the submission port (587), messages are delivered. netstat -lnt shows the following results, so it's not a port issue:

      tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
      tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN
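
    When Postfix answers on 587 but "dies" on 25 shortly after a restart, the usual suspects are something external throttling or firewalling port 25 (Plesk boxes often run fail2ban or similar) or the port-25 smtpd service itself failing. Some generic places to look, sketched with standard Postfix/Ubuntu commands:

      tail -f /var/log/mail.log                 # watch for 'fatal' lines as it goes down
      postconf -n                               # non-default settings
      grep -n '^smtp ' /etc/postfix/master.cf   # the port-25 service definition
      iptables -L -n | grep 'dpt:25'            # firewall/rate-limit rules on port 25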

    Read the article

  • What do the suffixes 'w' and 'd' mean in 'TIME+' in top?

    - by ssapkota
    Here's a chunk of the top from my server:

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      18878 www-data 20 0 200m 13m 4704 S 0 0.2 0:00.07 apache2
      12374 root 20 0 197m 9460 4480 S 0 0.1 21212906w apache2
      9136 root 20 0 79100 3488 2716 S 0 0.0 54518724d sshd

    I know TIME+ means the total CPU time the task has used since it started. But in the above output, I simply couldn't understand what 21212906w and 54518724d mean. A considerable number of processes are showing TIME+ with w and d suffixed. What does this mean? Is the server in trouble? Just to let you know, the server uptime is 4 days. EDIT: I can guess these refer to weeks and days. If so, why are the values so large considering the uptime? The server has 8 cores.
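
    With 4 days of uptime, 8 cores can only have accumulated 32 core-days of CPU time, so 21212906 weeks cannot be real: top is folding an impossibly large (almost certainly corrupted or overflowed) time counter into 'w'eeks and 'd'ays. A sanity check is to read the same number through other interfaces (standard procps; fields 14 and 15 of /proc/<pid>/stat are utime and stime in jiffies):

      ps -o pid,etime,cputime,comm -p 12374,9136
      awk '{print $14, $15}' /proc/12374/stat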

    Read the article

  • Solaris non-global zone X server

    - by ankimal
    I am not even sure if this is possible, but how can I start an X server in a non-global zone? This is what happens if I run startx from within my zone (I created xorg.conf by running /usr/X11/bin/xorgconfig):

      root@foo:/usr/X11/bin# startx
      xauth: creating new authority file /root/.serverauth.20957

      X.Org X Server 1.5.3
      Release Date: 5 November 2008
      X Protocol Version 11, Revision 0
      Build Operating System: SunOS 5.11 snv_108 i86pc
      Current Operating System: SunOS dsol101 5.11 snv_111b i86pc
      Build Date: 07 May 2009 04:44:56PM
      Solaris ABI: 64-bit
      SUNWxorg-server package version: 6.9.0.5.11.11100,REV=0.2009.05.07
      SUNWxorg-mesa package version: 6.9.0.5.11.11100,REV=0.2009.04.02
      Before reporting problems, check http://sunsolve.sun.com/ to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 10 19:17:53 2009
      (==) Using config file: "/etc/X11/xorg.conf"

      Fatal server error:
      xf86OpenConsole: Cannot open /dev/fb (No such file or directory)
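
    A non-global zone has no framebuffer; /dev/fb belongs to the global zone, so a hardware X server cannot start inside the zone. If X is only needed to run applications, a virtual-framebuffer server avoids the device entirely; this is a sketch, assuming Xvfb is present in the zone's X11 packages:

      /usr/X11/bin/Xvfb :1 -screen 0 1024x768x24 &
      DISPLAY=:1 /usr/X11/bin/xterm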

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications:

      2 vCPUs
      4G RAM
      160GB Disk Space
      Network 400Mb/s
      System Image: Ubuntu 12.04 LTS

    I am only running Magento CE 1.7.0.2 on this server, nothing else. Usually the server has a loading time of 4-5 seconds. Recently this has jumped to over 30 seconds, and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running the top command and sorting returns the following:

      top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91
      Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie
      Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      31901 www-data 20 0 360m 71m 5840 R 7 1.8 1:39.51 apache2
      32084 www-data 20 0 362m 72m 5548 R 7 1.8 1:31.56 apache2
      32089 www-data 20 0 348m 59m 5660 R 7 1.5 1:41.74 apache2
      32295 www-data 20 0 343m 54m 5532 R 7 1.4 2:00.78 apache2
      32303 www-data 20 0 354m 65m 5260 R 7 1.6 1:38.76 apache2
      32304 www-data 20 0 346m 56m 5544 R 7 1.4 1:41.26 apache2
      32305 www-data 20 0 348m 59m 5640 R 7 1.5 1:50.11 apache2
      32291 www-data 20 0 358m 69m 5256 R 6 1.7 1:44.26 apache2
      32517 www-data 20 0 345m 56m 5532 R 6 1.4 1:45.56 apache2
      30473 www-data 20 0 355m 66m 5680 R 6 1.7 2:00.05 apache2
      32093 www-data 20 0 352m 63m 5848 R 6 1.6 1:53.23 apache2
      32302 www-data 20 0 345m 56m 5512 R 6 1.4 1:55.87 apache2
      32433 www-data 20 0 346m 57m 5500 S 6 1.4 1:31.58 apache2
      32638 www-data 20 0 354m 65m 5508 R 6 1.6 1:36.59 apache2
      32230 www-data 20 0 347m 57m 5524 R 6 1.4 1:33.96 apache2
      32231 www-data 20 0 355m 66m 5512 R 6 1.7 1:37.47 apache2
      32233 www-data 20 0 354m 64m 6032 R 6 1.6 1:59.74 apache2
      32300 www-data 20 0 355m 66m 5672 R 6 1.7 1:43.76 apache2
      32510 www-data 20 0 347m 58m 5512 R 6 1.5 1:42.54 apache2
      32521 www-data 20 0 348m 59m 5508 R 6 1.5 1:47.99 apache2
      32639 www-data 20 0 344m 55m 5512 R 6 1.4 1:34.25 apache2
      32083 www-data 20 0 345m 56m 5696 R 5 1.4 1:59.42 apache2
      32085 www-data 20 0 347m 58m 5692 R 5 1.5 1:42.29 apache2
      32293 www-data 20 0 353m 64m 5676 R 5 1.6 1:52.73 apache2
      32301 www-data 20 0 348m 59m 5564 R 5 1.5 1:49.63 apache2
      32528 www-data 20 0 351m 62m 5520 R 5 1.6 1:36.11 apache2
      31523 mysql 20 0 3460m 576m 8288 S 5 14.4 2:06.91 mysqld
      32002 www-data 20 0 345m 55m 5512 R 5 1.4 2:01.88 apache2
      32080 www-data 20 0 357m 68m 5512 S 5 1.7 1:31.30 apache2
      32163 www-data 20 0 347m 58m 5512 S 5 1.5 1:58.68 apache2
      32509 www-data 20 0 345m 56m 5504 R 5 1.4 1:49.54 apache2
      32306 www-data 20 0 358m 68m 5504 S 4 1.7 1:53.29 apache2
      32165 www-data 20 0 344m 55m 5524 S 4 1.4 1:40.71 apache2
      32640 www-data 20 0 345m 56m 5528 R 4 1.4 1:36.49 apache2
      31888 www-data 20 0 359m 70m 5664 R 4 1.8 1:57.07 apache2
      32511 www-data 20 0 357m 67m 5512 S 3 1.7 1:47.00 apache2
      32054 www-data 20 0 357m 68m 5660 S 2 1.7 1:53.10 apache2
      1 root 20 0 24452 2276 1232 S 0 0.1 0:01.58 init

    Moreover, running free -m returns the following:

                   total  used  free  shared  buffers  cached
      Mem:          4003  3919    83       0      118      901
      -/+ buffers/cache:  2899  1103
      Swap:            0     0     0

    To investigate this further, I installed Apache Buddy. It recommended that I reduce the MaxClients connections, which I did. I also installed MySQLTuner, and it suggests that I set innodb_buffer_pool_size to >= 3.0G. However, I cannot do that, since the whole memory is 4G.
    Here is the output from Apache Buddy:

      ### GENERAL REPORT ###
      Settings considered for this report:
      Your server's physical RAM: 4003MB
      Apache's MaxClients directive: 40
      Apache MPM Model: prefork
      Largest Apache process (by memory): 73.77MB
      [ OK ] Your MaxClients setting is within an acceptable range.
      Max potential memory usage: 2950.8 MB
      Percentage of RAM allocated to Apache 73.72 %

    And this is the output of MySQLTuner:

      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M)
      [--] Reads / Writes: 45% / 55%
      [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads)
      [OK] Maximum possible memory usage: 2.5G (64% of installed RAM)
      [OK] Slow queries: 0% (0/675K)
      [OK] Highest usage of available connections: 26% (40/151)
      [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M
      [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads)
      [OK] Query cache efficiency: 92.5% (500K cached / 541K selects)
      [!!] Query cache prunes per day: 302886
      [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts)
      [!!] Joins performed without indexes: 12135
      [OK] Temporary tables created on disk: 25% (8K on disk / 32K total)
      [OK] Thread cache hit rate: 90% (1K created / 12K connections)
      [!!] Table cache hit rate: 17% (400 open / 2K opened)
      [OK] Open file limit used: 12% (123/1K)
      [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks)
      [!!] InnoDB buffer pool / data size: 2.0G/3.5G
      [OK] InnoDB log waits: 0
      -------- Recommendations -----------------------------------------------------
      General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Enable the slow query log to troubleshoot bad queries
        Adjust your join queries to always utilize indexes
        Increase table_cache gradually to avoid file descriptor limits
        Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
      Variables to adjust:
        query_cache_size (> 64M)
        join_buffer_size (> 128.0K, or always use indexes with joins)
        table_cache (> 400)
        innodb_buffer_pool_size (>= 3G)

    Last but not least, the server still has more than 60% free disk space. Now, based on the above, I have a few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
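
    On these numbers the box looks CPU-bound rather than memory-bound: a load near 28 on 2 vCPUs with 0.0%id means requests are queueing for CPU, and much of the "used" RAM is reclaimable cache. Before resizing anything, a generic way to measure the real Apache memory footprint (plain shell, illustrative only):

      # average resident size of the apache2 workers, in MB
      ps -o rss= -C apache2 | awk '{s+=$1; n++} END {printf "%.0f MB avg over %d procs\n", s/n/1024, n}'
      # rough ceiling: (RAM - MySQL - OS headroom) / avg worker size ~= safe MaxClients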

    Read the article

  • nginx + php-fpm: help optimizing configs

    - by Dmitro
    I have 3 servers. The first server (CPU model name: 06/17, 2.66GHz, 4 cores, 8GB RAM) has nginx as a load balancer with the following config:

      upstream lb_mydomain {
        server mydomain.ru:81 weight=2;
        server 66.0.0.18 weight=6;
      }
      server {
        listen 80;
        server_name ~(?!mydomain.ru)(.*);
        client_max_body_size 20m;
        location / {
          proxy_pass http://lb_mydomain;
          proxy_redirect off;
          proxy_set_header Connection close;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_pass_header Set-Cookie;
          proxy_pass_header P3P;
          proxy_pass_header Content-Type;
          proxy_pass_header Content-Disposition;
          proxy_pass_header Content-Length;
        }
      }

    And these settings from nginx.conf:

      user www-data;
      worker_processes 5;
      # worker_priority -1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      events {
        worker_connections 5024;
        # multi_accept on;
      }
      http {
        include /etc/nginx/mime.types;
        access_log /var/log/nginx/access.log;
        sendfile on;
        default_type application/octet-stream;
        #tcp_nopush on;
        keepalive_timeout 65;
        tcp_nodelay on;
        gzip on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        # PHP-FPM (backend)
        upstream php-fpm {
          server 127.0.0.1:9000;
        }
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
      }

    And the php-fpm config:

      listen = 127.0.0.1:9000
      ;listen.backlog = -1
      ;listen.allowed_clients = 127.0.0.1
      ;listen.owner = www-data
      ;listen.group = www-data
      ;listen.mode = 0666
      user = www-data
      group = www-data
      pm = dynamic
      pm.max_children = 80
      ;pm.start_servers = 20
      pm.min_spare_servers = 5
      pm.max_spare_servers = 35
      ;pm.max_requests = 500
      pm.status_path = /status
      ping.path = /ping
      ;ping.response = pong
      request_terminate_timeout = 30s
      request_slowlog_timeout = 10s
      slowlog = /var/log/php-fpm.log.slow
      ;rlimit_files = 1024
      ;rlimit_core = 0
      ;chroot =
      chdir = /var/www
      ;catch_workers_output = yes
      ;env[HOSTNAME] = $HOSTNAME
      ;env[PATH] = /usr/local/bin:/usr/bin:/bin
      ;env[TMP] = /tmp
      ;env[TMPDIR] = /tmp
      ;env[TEMP] = /tmp
      ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
      ;php_flag[display_errors] = off
      ;php_admin_value[error_log] = /var/log/fpm-php.www.log
      ;php_admin_flag[log_errors] = on
      ;php_admin_value[memory_limit] = 32M

    In top I see 20 php-fpm processes, each using from 1% to 15% CPU, so it has a high load average:

      top - 15:36:22 up 34 days, 20:54, 1 user, load average: 5.98, 7.75, 8.78
      Tasks: 218 total, 1 running, 217 sleeping, 0 stopped, 0 zombie
      Cpu(s): 34.1%us, 3.2%sy, 0.0%ni, 37.0%id, 24.8%wa, 0.0%hi, 0.9%si, 0.0%st
      Mem: 8183228k total, 7538584k used, 644644k free, 351136k buffers
      Swap: 9936892k total, 14636k used, 9922256k free, 990540k cached

    The second server (CPU model name: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz, 8 cores, 8GB RAM) has these nginx settings from nginx.conf:

      user www-data;
      worker_processes 5;
      # worker_priority -1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      events {
        worker_connections 5024;
        # multi_accept on;
      }
      http {
        include /etc/nginx/mime.types;
        access_log /var/log/nginx/access.log;
        sendfile on;
        default_type application/octet-stream;
        #tcp_nopush on;
        keepalive_timeout 65;
        tcp_nodelay on;
        gzip on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        # PHP-FPM (backend)
        upstream php-fpm {
          server 127.0.0.1:9000;
        }
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
      }

    And this php-fpm config:

      listen = 127.0.0.1:9000
      ;listen.backlog = -1
      ;listen.allowed_clients = 127.0.0.1
      ;listen.owner = www-data
      ;listen.group = www-data
      ;listen.mode = 0666
      user = www-data
      group = www-data
      pm = dynamic
      pm.max_children = 50
      ;pm.start_servers = 20
      pm.min_spare_servers = 5
      pm.max_spare_servers = 35
      ;pm.max_requests = 500
      ;pm.status_path = /status
      ;ping.path = /ping
      ;ping.response = pong
      ;request_terminate_timeout = 0
      ;request_slowlog_timeout = 0
      ;slowlog = /var/log/php-fpm.log.slow
      ;rlimit_files = 1024
      ;rlimit_core = 0
      ;chroot =
      chdir = /var/www
      ;catch_workers_output = yes
      ;env[HOSTNAME] = $HOSTNAME
      ;env[PATH] = /usr/local/bin:/usr/bin:/bin
      ;env[TMP] = /tmp
      ;env[TMPDIR] = /tmp
      ;env[TEMP] = /tmp
      ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
      ;php_flag[display_errors] = off
      ;php_admin_value[error_log] = /var/log/fpm-php.www.log
      ;php_admin_flag[log_errors] = on
      ;php_admin_value[memory_limit] = 32M

    In top I see 50 php-fpm processes, each using from 10% to 25% CPU, so it has a high load average:

      top - 15:53:05 up 33 days, 1:15, 1 user, load average: 41.35, 40.28, 39.61
      Tasks: 239 total, 40 running, 199 sleeping, 0 stopped, 0 zombie
      Cpu(s): 96.5%us, 3.1%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st
      Mem: 8185560k total, 7804224k used, 381336k free, 161648k buffers
      Swap: 19802108k total, 16k used, 19802092k free, 5068112k cached

    The third server hosts the PostgreSQL database. I also tried ab -n 50 -c 5 http://www.mydomain.ru/ and got the following:

      Complete requests: 50
      Failed requests: 48
         (Connect: 0, Receive: 0, Length: 48, Exceptions: 0)
      Write errors: 0
      Total transferred: 9271367 bytes
      HTML transferred: 9247767 bytes
      Requests per second: 1.02 [#/sec] (mean)
      Time per request: 4882.427 [ms] (mean)
      Time per request: 976.486 [ms] (mean, across all concurrent requests)
      Transfer rate: 185.44 [Kbytes/sec] received

    Please advise how I can lower the load average.
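
    Two hedged observations from those snapshots: the first server is partly I/O-bound (24.8%wa), while the second is purely CPU-bound (96.5%us with 40 workers runnable on 8 cores), so on the second box pm.max_children = 50 mostly lengthens the run queue; something closer to twice the core count is a common starting point. The first pool already sets pm.status_path = /status, so worker demand can be measured rather than guessed (this assumes an nginx location passes /status to the pool; the "listen queue" line needs PHP 5.4+):

      curl -s http://127.0.0.1/status
      # 'active processes' vs pm.max_children = concurrency actually needed
      # a persistently non-zero 'listen queue' = workers can't keep up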

    Read the article

  • Diagnosing high CPU waiting

    - by Will
    I have a monitoring server running icinga/collectd/graphite with about 50 hosts. I have noticed high load and sluggish performance on the box. If you take a look at top, you'll see:

      Cpu(s): 0.6%us, 0.2%sy, 0.0%ni, 7.6%id, 23.4%wa, 0.0%hi, 0.2%si, 0.0%st

    Notice the HUGE %wa value, which as far as I know means a network or disk bottleneck. ifconfig shows no dropped packets, and there's not a ton of bandwidth going on, so that leaves disk issues, right? There's not a lot of disk writing going on either; iotop reports we're only writing a little over 1 MB per second, and the RAID tool reports everything is A-OK with write caching enabled. How do I go about trying to figure out how to fix this? UPDATE: iostat -x output is:

      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                 0.62   0.10     0.31     9.65    0.00  89.31

      Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
      sda        0.21   33.34  83.55  16.54  1599.94  399.07     19.97     43.21  416.98   3.71  37.13
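
    The iostat line arguably answers the question: an average queue (avgqu-sz) of ~43 and ~417 ms await on sda is a saturated disk even though throughput is tiny, which on a graphite/collectd box usually means many small random writes. To attribute them to processes (iotop is already installed per the post; pidstat is from sysstat):

      iotop -oPa     # accumulated I/O per process, active ones only
      pidstat -d 5   # per-process read/write rates every 5 seconds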

    Read the article

  • Linux server: Apache httpd processes take I/O wait close to 100% and lock down the server

    - by user3682065
    For about 5 days now, and seemingly out of the blue, my Linux server has started locking up from time to time. The pattern is always the same, as far as I can tell from the top and iotop commands around the time it starts happening: one or more httpd processes (usually one) hang and start using 100% of CPU, the %wa goes close to 100%, and in iotop I see several httpd processes with 99.99% in the IO column. I'm also running an SVN server on this machine through Apache, and the one way I've been able to consistently reproduce this is to do an SVN commit of new files or an SVN update from the repository on this server (I am the only one using this SVN repository). That always reproduces the scenario, but until very recently I had no problems at all checking in/out of SVN. Sometimes it also just happens for no detectable reason. So it seems like there is some issue with my Apache that makes processes use a lot of read/write upon certain triggers, and I was wondering if anyone could help me uncover that issue. EDIT: OK, now it's happening again. This is top:

      [root@server ~]# top
      top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01
      Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie
      Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers
      Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      10390 apache 20 0 2774m 936m 6200 D 2.0 47.4 1:52.27 httpd
      2149 root 20 0 927m 13m 1040 S 0.7 0.7 1:50.46 namecoind
      11 root 20 0 0 0 0 R 0.3 0.0 0:30.10 events/0
      23 root 20 0 0 0 0 S 0.3 0.0 0:17.88 kblockd/1
      2049 root 20 0 382m 4932 2880 D 0.3 0.2 0:03.67 httpd
      2144 root 20 0 1702m 69m 1164 S 0.3 3.5 5:19.68 bitcoind
      6325 root 20 0 15164 1100 656 R 0.3 0.1 0:11.09 top
      10311 apache 20 0 387m 9496 7320 D 0.3 0.5 0:01.89 httpd
      10313 apache 20 0 391m 10m 7364 D 0.3 0.5 0:02.40 httpd
      10466 apache 20 0 399m 12m 7392 D 0.3 0.7 0:02.41 httpd
      10599 apache 20 0 391m 9324 7340 D 0.3 0.5 0:00.15 httpd
      10628 apache 20 0 384m 7620 4052 D 0.3 0.4 0:00.01 httpd
      10633 apache 20 0 384m 7048 3504 D 0.3 0.3 0:00.01 httpd
      10634 apache 20 0 384m 8012 4048 D 0.3 0.4 0:00.02 httpd
      10638 apache 20 0 400m 22m 9.8m D 0.3 1.1 0:01.93 httpd
      10640 apache 20 0 385m 8288 4028 D 0.3 0.4 0:00.03 httpd
      10641 apache 20 0 401m 21m 6376 D 0.3 1.1 0:01.45 httpd
      10759 apache 20 0 385m 8816 3480 D 0.3 0.4 0:01.45 httpd
      10773 apache 20 0 384m 8044 3464 D 0.3 0.4 0:00.02 httpd

    This is an iotop snapshot:

      Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s
      TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
      10732 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 58.48 % httpd
      876 be/3 root 0.00 B/s 52.68 K/s 0.00 % 52.98 % [jbd2/dm-1-8]
      10906 be/4 root 124.17 K/s 0.00 B/s 0.00 % 23.03 % sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1
      2156 be/4 root 206.94 K/s 0.00 B/s 0.00 % 21.15 % bitcoind
      10904 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 18.94 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
      10773 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 14.77 % httpd
      10641 be/4 apache 15.05 K/s 0.00 B/s 0.00 % 11.57 % httpd
      10399 be/4 apache 1057.29 K/s 0.00 B/s 43.16 % 10.56 % httpd
      10682 be/4 sw-cp-se 158.03 K/s 0.00 B/s 0.00 % 7.45 % sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d auto_prepend_file=auth.php3 -u psaadm
      10774 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 6.53 % httpd
      10624 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 5.53 % httpd
      10356 be/4 apache 899.26 K/s 0.00 B/s 35.52 % 4.01 % httpd
      10795 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 3.93 % httpd
      10804 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 3.08 % httpd
      4379 be/4 root 2.89 M/s 0.00 B/s 99.99 % 0.00 % namecoind
      10619 be/4 apache 462.80 K/s 0.00 B/s 7.80 % 0.00 % httpd
      10636 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 0.00 % httpd
      10716 be/4 mysql 105.35 K/s 0.00 B/s 5.92 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
      1988 be/4 root 18.81 K/s 0.00 B/s 0.00 % 0.00 % spamd_full.sock

    I also ran lsof -p for PID 10390, which was at the top of the top output, and this is the bottom line, where I can sort of see what request this was; it says CLOSE_WAIT:

      httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT)

    I'm still not sure what exactly is causing all of this to happen. I killed that process, but %wa and load average remained high; I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it again without finding the remaining hanging httpd processes via "netstat -tulpn", killing those or doing "killall -9 httpd", and, after waiting a while for it to cycle through all of them, doing /etc/init.d/httpd start.
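
    One number in that top stands out: about 2.9 GB of swap is in use on a 2 GB RAM machine, and the stuck httpd shows 2774m VIRT against 936m RES, so each hung request is plausibly thrashing pages in from swap, which presents as exactly this kind of %wa lockup (the SWAPIN column in the iotop output agrees). A generic way to watch it happen during an SVN commit:

      vmstat 5    # si/so columns show pages swapped in/out per interval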

    Read the article

  • My webserver with 16GB RAM shows all RAM as used, but is it really? See the 'top'

    - by Alex
    I have some questions about my web server. It's a LAMP web server running CentOS 5.5, PHP 5, and MySQL 5. The server gets hundreds (maybe thousands) of concurrent users during peak hours. I'm trying to optimize a little and understand "top". From what I can see, all 16 GB of my RAM have been used up? Does that mean my server needs more memory? My swap is only 2 GB; should it be increased? Usually during peak hours the first load-average number is about 2.5-3. What could I do to optimize the server so that the load average doesn't go above 1, even during peak? In the past I was told a good working server should stay under a load of 1; is this still true? Although even during a load of 2.5-3, server pages and applications seem to load with pretty good speed. Also, what should memory_limit in php.ini be set to?

      top - 14:30:18 up 2 days, 12:41, 5 users, load average: 1.25, 1.74, 2.92
      Tasks: 305 total, 2 running, 302 sleeping, 0 stopped, 1 zombie
      Cpu(s): 6.3%us, 0.9%sy, 0.0%ni, 92.5%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
      Mem: 16427200k total, 16111472k used, 315728k free, 3120316k buffers
      Swap: 2104496k total, 268k used, 2104228k free, 6216756k cached

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      29080 apache 15 0 358m 36m 5192 S 20.2 0.2 2:08.40 httpd
      29093 apache 18 0 357m 36m 5192 S 18.2 0.2 2:02.52 httpd
      29079 apache 15 0 370m 49m 5832 S 10.0 0.3 2:32.14 httpd
      1812 apache 15 0 370m 49m 5196 S 7.3 0.3 2:25.30 httpd
      5204 apache 15 0 358m 36m 5168 S 5.3 0.2 0:59.28 httpd
      29075 apache 15 0 370m 48m 5184 S 3.3 0.3 2:15.93 httpd
      9712 apache 15 0 360m 38m 5180 S 3.0 0.2 0:54.81 httpd
      29072 apache 16 0 358m 36m 5192 S 2.7 0.2 2:24.43 httpd
      6310 apache 17 0 388m 67m 5180 S 2.3 0.4 0:58.85 httpd
      8674 apache 15 0 343m 21m 4980 S 2.0 0.1 0:07.91 httpd
      29085 apache 15 0 371m 49m 5224 S 2.0 0.3 2:16.86 httpd
      29083 apache 15 0 370m 48m 5196 S 1.7 0.3 2:10.64 httpd
      5575 apache 15 0 357m 36m 5228 S 1.3 0.2 0:53.78 httpd
      29066 apache 15 0 379m 59m 5860 R 1.3 0.4 2:11.93 httpd
      29078 apache 15 0 370m 48m 5188 S 1.3 0.3 2:14.52 httpd
      29084 apache 15 0 370m 48m 5208 S 1.0 0.3 2:02.49 httpd
      29089 apache 15 0 370m 48m 5188 S 1.0 0.3 2:27.61 httpd
      29082 apache 15 0 390m 68m 5188 S 0.7 0.4 2:32.48 httpd
      29984 apache 15 0 358m 36m 5228 S 0.7 0.2 2:08.32 httpd
      3571 root 16 0 13400 1792 848 S 0.3 0.0 2:37.89 top
      4419 mysql 15 0 668m 175m 7204 S 0.3 1.1 3:32.25 mysqld
      28181 root 15 0 90460 3624 2680 S 0.3 0.0 0:17.60 sshd
      29091 apache 15 0 390m 69m 5196 S 0.3 0.4 2:29.99 httpd
      32476 root 15 0 12900 1320 848 R 0.3 0.0 0:06.46 top
      1 root 15 0 10372 680 572 S 0.0 0.0 0:02.01 init
      2 root RT -5 0 0 0 S 0.0 0.0 0:00.51 migration/0
      3 root 34 19 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/0
      4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
      5 root RT -5 0 0 0 S 0.0 0.0 0:00.12 migration/1
      6 root 34 19 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/1
      7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
      8 root RT -5 0 0 0 S 0.0 0.0 0:00.06 migration/2
      9 root 34 19 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/2
      10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2
      11 root RT -5 0 0 0 S 0.0 0.0 0:00.06 migration/3
      12 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/3
      13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3
      14 root RT -5 0 0 0 S 0.0 0.0 0:01.45 migration/4
      15 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/4
      16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4
      17 root RT -5 0 0 0 S 0.0 0.0 0:00.22 migration/5
      18 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/5
      19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5
      20 root RT -5 0 0 0 S 0.0 0.0 0:00.15 migration/6
      21 root 34 19 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/6
      22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6
      23 root RT -5 0 0 0 S 0.0 0.0 0:00.15 migration/7
      24 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/7
      25 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/7
      26 root RT -5 0 0 0 S 0.0 0.0 0:00.19 migration/8
      27 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/8
      28 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/8
      29 root RT -5 0 0 0 S 0.0 0.0 0:00.34 migration/9
      30 root 34 19 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/9
      31 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/9
      32 root RT -5 0 0 0 S 0.0 0.0 0:00.16 migration/10
      33 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/10
      34 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/10
      35 root RT -5 0 0 0 S 0.0 0.0 0:00.12 migration/11
      36 root 34 19 0 0 0 S 0.0 0.0 0:00.05 ksoftirqd/11
      37 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/11
      38 root RT -5 0 0 0 S 0.0 0.0 0:00.35 migration/12
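
    In that snapshot about 3.1 GB is buffers and 6.2 GB is page cache, both of which the kernel hands back on demand, so real application usage is roughly 16111472k - 3120316k - 6216756k, about 6.5 GB of 16 GB; the nearly unused swap (268k) supports that the box is not short of memory. The second line of free already does this arithmetic:

      free -m | awk '/buffers\/cache/ {print "really used: " $3 " MB, really free: " $4 " MB"}'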

    Read the article

  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    On a system running Ubuntu 10.04, my RAID-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!):

      dimmer@paimon:~$ cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdc1[2] sdb1[1]
            1953513408 blocks [2/1] [_U]
            [====>................] recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec
      unused devices: <none>

    Even though I have set the kernel variables to reasonably quick values:

      dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min
      1000000
      dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max
      100000000

    I am using two 2.0 TB Western Digital hard disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned:

      dimmer@paimon:/sys$ sudo parted /dev/sdb
      GNU Parted 2.2
      Using /dev/sdb
      Welcome to GNU Parted! Type 'help' to view a list of commands.
      (parted) p
      Model: ATA WDC WD20EARS-00M (scsi)
      Disk /dev/sdb: 2000GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Number  Start   End     Size    File system  Name  Flags
      1       1049kB  2000GB  2000GB  ext4
      (parted) unit s
      (parted) p
      Number  Start  End          Size         File system  Name  Flags
      1       2048s  3907028991s  3907026944s  ext4
      (parted) q

      dimmer@paimon:/sys$ sudo parted /dev/sdc
      GNU Parted 2.2
      Using /dev/sdc
      Welcome to GNU Parted! Type 'help' to view a list of commands.
      (parted) p
      Model: ATA WDC WD20EARS-00J (scsi)
      Disk /dev/sdc: 2000GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt
      Number  Start   End     Size    File system  Name  Flags
      1       1049kB  2000GB  2000GB  ext4

    I am beginning to think that I have a hardware problem; otherwise I can't imagine why the mdadm restore should be so slow. I have done a benchmark on /dev/sdc using Ubuntu's Disk Utility GUI app, and the results looked normal, so I know sdc is capable of writing faster than this. I also had the same problem on a similar WD drive that I RMAd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by CPU usage (notice there is ~0 CPU usage; iowait is also zero, which seems strange):

      top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30
      Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers
      Swap: 1526132k total, 0k used, 1526132k free, 535416k cached

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      45 root 20 0 0 0 0 S 0 0.0 2:17.02 scsi_eh_0
      1 root 20 0 2808 1752 1204 S 0 0.1 0:00.46 init
      2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd
      3 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/0
      4 root 20 0 0 0 0 S 0 0.0 0:00.17 ksoftirqd/0
      5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0
      6 root RT 0 0 0 0 S 0 0.0 0:00.02 migration/1
      ...

    dmesg errors, definitely looking like hardware:

      [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [202884.007015] ata5.00: failed command: FLUSH CACHE EXT
      [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
      [202884.013730]          res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout)
      [202884.033667] ata5.00: status: { DRDY }
      [202884.040329] ata5: hard resetting link
      [202889.400050] ata5: link is slow to respond, please be patient (ready=0)
      [202894.048087] ata5: COMRESET failed (errno=-16)
      [202894.054663] ata5: hard resetting link
      [202899.412049] ata5: link is slow to respond, please be patient (ready=0)
      [202904.060107] ata5: COMRESET failed (errno=-16)
      [202904.066646] ata5: hard resetting link
      [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [202905.849178] ata5.00: configured for UDMA/133
      [202905.849188] ata5: EH complete
      [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [203899.007096] ata5.00: failed command: IDENTIFY DEVICE
      [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
      [203899.013843]          res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout)
      [203899.041232] ata5.00: status: { DRDY }
      [203899.048133] ata5: hard resetting link
      [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [203899.826062] ata5.00: configured for UDMA/133
      [203899.826079] ata5: EH complete
      [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [204375.007421] ata5.00: failed command: IDENTIFY DEVICE
      [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
      [204375.014800]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
      [204375.044374] ata5.00: status: { DRDY }
      [204375.051842] ata5: hard resetting link
      [204380.408049] ata5: link is slow to respond, please be patient (ready=0)
      [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [204384.449938] ata5.00: configured for UDMA/133
      [204384.449955] ata5: EH complete
      [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      [204395.988140] ata5.00: failed command: IDENTIFY DEVICE
      [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
      [204395.988149]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
      [204395.988151] ata5.00: status: { DRDY }
      [204395.988156] ata5: hard resetting link
      [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [204399.330487] ata5.00: configured for UDMA/133
      [204399.330503] ata5: EH complete
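
    Those dmesg lines (repeated hard resets, COMRESET failures, "link is slow to respond" on ata5) point at the SATA link rather than md itself: every reset stalls the rebuild for tens of seconds, which averages out to the 113K/sec shown. Hedged steps, roughly in order of cheapness: reseat or replace the cable on that port, then check SMART on the member behind ata5; with WD20EARS drives, a rapidly climbing Load_Cycle_Count (aggressive head parking) is also worth ruling out:

      smartctl -a /dev/sdc | egrep 'Load_Cycle|Reallocated|Pending|UDMA_CRC'
      # a rising UDMA_CRC_Error_Count usually means cabling, not the disk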

    Read the article

  • Clear / Flush cached memory

    - by TheDave
    I have a small VPS with 6GB RAM hosting a couple of websites. Recently I have noticed that my cached memory size is quite high; see below:

      Cpu(s): 0.1%us, 0.1%sy, 0.0%ni, 99.1%id, 0.0%wa, 0.2%hi, 0.4%si, 0.0%st
      Mem: 6113256k total, 5949620k used, 163636k free, 398584k buffers
      Swap: 1048564k total, 104k used, 1048460k free, 3586468k cached

    After investigating whether there is some method to have this flushed or cleared, I stumbled upon this command: sync; echo 3 > /proc/sys/vm/drop_caches. I read it could be useful to add this as a cron task/job. Is this method recommended, or could it lead to problems? The only concern I have is that I run one Magento installation on memcached; could this have any negative effects on it? I am certainly not a pro, therefore I would very much appreciate some expert advice. PS: My VPS runs on CentOS 5 x64 and I have WHM + NGINX installed.
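
    The cached figure is page cache that the kernel reclaims automatically the moment applications ask for memory, so a cron job dropping it mostly discards useful disk cache; it also does nothing to memcached, whose memory is ordinary process RSS, not page cache. The number that matters is the free column of the "-/+ buffers/cache" line (a standard check, not from the post):

      free -m
      # the 'free' value on the '-/+ buffers/cache' line is what is really available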

    Read the article

  • httpd service keeps restarting after 15-20 mins

    - by niraj
    I have recently purchased a dedicated server which has 16 GB RAM and a 1 TB hard disk. It has cPanel, and CSF installed for the firewall. I am mainly going to use it for a file hosting service. Since the day I moved in, my httpd service keeps restarting every 15-20 minutes; it becomes unresponsive after that, so I have to restart it manually. My httpd settings are:

      Start Servers = 5
      Minimum Spare Servers = 5
      Maximum Spare Servers = 10
      Server Limit = 20000
      Max Clients = 10000
      Max Requests Per Child = 10000
      Keep-Alive = On
      Keep-Alive Timeout = 5
      Max Keep-Alive Requests = Unlimited
      Timeout 300

    top is:

      top - 14:53:41 up 1 day, 23:39, 2 users, load average: 0.10, 0.14, 0.09
      Tasks: 1563 total, 1 running, 1562 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.7%us, 0.6%sy, 0.0%ni, 98.1%id, 0.2%wa, 0.0%hi, 0.5%si, 0.0%st
      Mem: 16303780k total, 16142048k used, 161732k free, 135264k buffers
      Swap: 8224760k total, 868k used, 8223892k free, 14136616k cached

    Please help me with this; it keeps happening.
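
    Those numbers are far beyond what 16 GB can back: at even a modest 30 MB per Apache child, 10000 Max Clients implies around 300 GB of potential resident memory, so under load the box swaps until httpd stops responding. A hedged starting point sized for this machine, to be tuned after measuring real per-child memory:

      StartServers          10
      MinSpareServers       10
      MaxSpareServers       20
      ServerLimit           400
      MaxClients            400
      MaxRequestsPerChild   5000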

    Read the article

  • Linux migration/N high CPU consumption

    - by Alexander
    On my Linux appliance, based on the 3.0.0-14 kernel, I got:

      RPN:/tmp# ps axuf | grep migration
      root 6 92.9 0.0 0 0 ? S Apr23 2788:33 \_ [migration/0]
      root 7 99.7 0.0 0 0 ? S Apr23 2993:20 \_ [migration/1]

    My top is:

      RPN:/tmp# top -b -n1
      top - 12:03:41 up 2 days, 2:18, 5 users, load average: 25.76, 25.26, 24.73
      Tasks: 171 total, 1 running, 168 sleeping, 0 stopped, 2 zombie
      Cpu(s): 14.0%us, 12.6%sy, 0.8%ni, 72.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
      Mem: 1543032k total, 1264728k used, 278304k free, 25308k buffers
      Swap: 0k total, 0k used, 0k free, 183168k cached

    My question: why do the "migration/N" processes take so much CPU?
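
    Note that ps computes %CPU as accumulated CPU time divided by the process's elapsed lifetime, not current usage, so a kernel thread whose time counter has run away can show ~100% in ps while top reports the machine largely idle, as here. Comparing a cumulative view with a sampled one makes that visible (pidstat is from sysstat, if available):

      ps -o pid,pcpu,cputime,etime,comm -p 6,7   # cumulative since boot
      pidstat -p 6,7 1 5                         # actual usage right now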

    Read the article

  • CMD Command to create folder for each file and move file into folder

    - by Tom
    I need a command that can be run from the command line to create a folder for each file (based on the filename) in a directory and then move the file into the newly created folder. Example starting folder:

      Dog.jpg
      Cat.jpg

    The following command works great at creating a folder for each filename in the current working directory:

      for %i in (*) do md "%~ni"

    Result folder:

      \Dog\
      \Cat\
      Dog.jpg
      Cat.jpg

    I need to take this one step further and move the file into the folder. What I want to achieve is:

      \Dog\Dog.jpg
      \Cat\Cat.jpg

    Can someone help me with one command to do all of this?
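
    A hedged one-liner extending the command from the question (cmd.exe syntax; inside a batch file, double the percent signs as %%i). The parentheses make both steps run for each file, and md's complaint when a folder already exists is harmless here:

      for %i in (*) do (md "%~ni" & move "%i" "%~ni")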

    Read the article

  • "Cannot allocate memory" while no process seems to be using up memory

    - by omat
    I am not competent on server issues; any help is much appreciated. When I try to start a python/django shell on a Linux box, I am getting OSError: [Errno 12] Cannot allocate memory. free -m seems to confirm I am out of memory:

                   total  used  free  shared  buffers  cached
      Mem:           590   560    29       0        3      37
      -/+ buffers/cache:   518    71
      Swap:            0     0     0

    But I cannot see what is eating up the memory with top or ps aux:

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
      1 root 20 0 24336 908 0 S 0.0 0.2 0:00.68 init
      2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
      3 root 20 0 0 0 0 S 0.0 0.0 0:04.85 ksoftirqd/0

    How can I identify the leak? Thanks. BTW, I am not sure if it is relevant, but the machine I am talking about is an AWS EC2 instance running Ubuntu 12.
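
    With ~590 MB of RAM, no swap, and only ~71 MB truly free after cache, fork() can fail even though nothing is leaking: spawning a child briefly requires the parent's address space to be committable a second time. A common hedged fix on small EC2 instances is adding a swap file so the fork can be backed:

      dd if=/dev/zero of=/swapfile bs=1M count=1024
      chmod 600 /swapfile
      mkswap /swapfile
      swapon /swapfile
      echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots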

    Read the article

  • High Load average threshold in linux

    - by user2481010
    One of my friends said that his server's load average sometimes goes above 500-1000, which to me is a strange value, because I have never seen a load average of more than 10. I asked him to give me some snapshots of top and memory usage, and he gave the following details:

    top usage:

      top - 06:06:03 up 117 days, 23:02, 2 users, load average: 147.37, 44.57, 15.95
      Tasks: 116 total, 2 running, 113 sleeping, 0 stopped, 1 zombie
      Cpu(s): 16.6%us, 6.9%sy, 0.0%ni, 9.2%id, 66.5%wa, 0.0%hi, 0.8%si, 0.0%st
      Mem: 8161648k total, 7779528k used, 382120k free, 3296k buffers
      Swap: 5242872k total, 1293072k used, 3949800k free, 168660k cached

    free:

      $ free -gt
             total used free shared buffers cached
      Mem:       7    6    1      0       0      4
      -/+ buffers/cache: 1 5
      Swap:      4    0    4
      Total:    12    6    6

    Total CPUs:

      $ nproc
      8

    My question: is it possible to have a load average of more than 100 on an 8-core, 12 GB memory server? I have read many tutorials and articles on load average that give the rule of thumb "number of cores = max load". According to that rule the max load average here would be 16, so how is his server running with a load of 147.37? He said 147.37 is on the low end; sometimes it goes above 500.
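
    Yes, it is possible: on Linux the load average counts tasks in uninterruptible sleep (state D, typically waiting on disk or NFS I/O) as well as runnable ones, and the 66.5%wa in that top says the box is I/O-bound. A load of 147 on 8 cores therefore usually means on the order of 140 tasks queued on I/O, not 140 busy CPUs; they can be counted directly:

      ps -eo stat,pid,comm | awk '$1 ~ /^D/ {print; n++} END {print n " tasks in D state"}'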

    Read the article

  • Swap 95%+ used, but a lot of free RAM

    - by Paolo_NL_FR
    I am running CentOS 5.8 with cPanel. Lately I am getting reports that my swap is full, but there is a lot of free memory available:

      top - 10:33:43 up 133 days, 17:00, 1 user, load average: 0.05, 0.03, 0.05
      Tasks: 170 total, 1 running, 169 sleeping, 0 stopped, 0 zombie
      Cpu(s): 2.1%us, 0.5%sy, 0.0%ni, 97.2%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
      Mem: 24726100k total, 8255368k used, 16470732k free, 599560k buffers
      Swap: 1046520k total, 984740k used, 61780k free, 3641828k cached

    How do I solve this? The unused RAM should be used instead of the swap. Or should I increase the swap (and how do I do that)? Thanks
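
    Pages swapped out during an earlier memory spike are not read back until something touches them, so "swap nearly full, RAM mostly free" is a stable leftover rather than a live problem. If the goal is simply to clear it, the pages can be forced back into the plentiful free RAM; this removes all swap for a moment, so only do it when free memory comfortably exceeds swap used, as it does here:

      swapoff -a && swapon -a
      sysctl vm.swappiness=10   # optionally make the kernel less eager to swap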

    Read the article

  • Postfix not receiving non-local mail

    - by Davis Sorenson
    I set up a server with Postfix/Dovecot on Linode/Ubuntu 10.04 according to this guide; admittedly, I've never done this before. Local mail works just fine, but trying to send email to it from external addresses results in errors like this:

      Delivery to the following recipient failed permanently: <address>@ni-mate.com
      Technical details of permanent failure:
      Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 553 553 Unknown recipient. (state 13).

    I honestly have no idea what to do or which configuration files/logs anyone needs to see.
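
    Postfix's stock rejection for an unknown local user normally reads more like "550 5.1.1 ... User unknown in local recipient table", so a bare "553 Unknown recipient" makes it worth confirming that outside mail is reaching this box at all before editing Postfix. Generic checks, sketched:

      dig MX ni-mate.com +short    # does MX actually point at this Linode?
      postconf mydestination mynetworks
      tail -n 50 /var/log/mail.log # was Google's delivery attempt even logged?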

    Read the article

  • What are the current options to encrypt a partition on Mac OS X?

    - by symbion
    I recently had my laptop stolen with some sensitive information on it (personal source code, bank details in a secure file, passwords, etc.), and I learnt the lesson: encrypt your sensitive data. Now I am wondering what the options are to encrypt a partition (not an encrypted disk image). Aim: the aim is to prevent anyone except me from accessing that data. Requirement 0: the software must be able to encrypt a non-system partition. Requirement 1: plausible deniability is required; preventing cold boot attacks, however, is not an absolute requirement (I am not famous enough, nor is my info sensitive enough, for that kind of requirement). Requirement 2: software taking advantage of AES hardware encryption is very welcome, as I intend to get a MacBook Pro with an i7 CPU (with AES-NI instructions enabled); I will have a virtual machine running in the encrypted partition. Requirement 3: free or reasonably cheap. Requirement 4: the software must run on Mac OS X Snow Leopard or Lion. So far, TrueCrypt is the only option I have found. Regards,

    Read the article

  • Load on Ubuntu 8.04 LTS high

    - by Paddington
    My Ubuntu 8.04 LTS server periodically has a high load average spike (once every 2 days), resulting in Apache timing out; virtually everything, even SSH to the server, becomes impossible. When I am on the console and run top, I see the load average increase from less than 1 to above 60 in 15 minutes. How can I isolate the cause?

      top - 09:21:51 up 37 days, 20:18, 6 users, load average: 5.41, 5.53, 5.36
      Tasks: 160 total, 2 running, 156 sleeping, 0 stopped, 2 zombie
      Cpu(s): 65.0%us, 8.8%sy, 0.0%ni, 1.0%id, 24.6%wa, 0.3%hi, 0.3%si, 0.0%st
      Mem: 3989468k total, 3444984k used, 544484k free, 360460k buffers
      Swap: 11687248k total, 178168k used, 11509080k free, 881772k cached
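
    Since the spike only comes every couple of days, one generic approach is to record continuously and look afterwards: sysstat keeps CPU, I/O, and load history, and a crude cron trigger can snapshot the process table whenever the 1-minute load crosses a threshold (the threshold and paths here are illustrative):

      apt-get install sysstat      # then enable collection in /etc/default/sysstat
      sar -q                       # afterwards: run-queue and load history
      sar -u                       # CPU history; watch the %iowait column around the spike
      # /etc/cron.d/loadwatch: snapshot top when 1-minute load exceeds 10
      * * * * * root awk '$1 > 10 {exit 1}' /proc/loadavg || top -b -n1 >> /var/log/load-spike.log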

    Read the article
