Performance Tuning a High-Load Apache Server
Posted by futureal on Server Fault
Published on 2011-02-04T00:34:38Z
I am looking to understand some server performance problems I am seeing with a (for us) heavily loaded web server. The environment is as follows:
- Debian Lenny (all stable packages, plus current security patches)
- Apache 2.2.9
- PHP 5.2.6
- Amazon EC2 large instance
The behavior we're seeing is that the site typically feels responsive, but with a slight delay before a request begins to be handled -- sometimes a fraction of a second, sometimes 2-3 seconds at our peak usage times. The load on the server is reported as very high -- often 10.xx or 20.xx according to top. Further, running other things on the server during these times (even vi) is very slow, so the load is definitely up there. Oddly enough, Apache remains very responsive apart from that initial delay.
We have Apache configured as follows, using prefork:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0
And KeepAlive as:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
Looking at the server-status page, even at these times of heavy load we are rarely hitting the client cap, usually serving between 80 and 100 requests, many of them in the keepalive state. That suggests I can rule out "waiting for a free worker" as the cause of the initial request slowness, but I may be wrong.
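In case it is useful for diagnosis, the worker counts can be sampled periodically with something along these lines (this assumes mod_status is enabled and /server-status is reachable from localhost; the ?auto query string returns the machine-readable variant):
# Sample busy/idle worker counts every 5 seconds from mod_status
watch -n 5 'curl -s "http://localhost/server-status?auto" | egrep "BusyWorkers|IdleWorkers"'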
Amazon's CloudWatch monitoring tells me that even when our OS is reporting a load average of > 15, our instance CPU utilization is between 75% and 80%.
Example output from top:
top - 15:47:06 up 31 days, 1:38, 8 users, load average: 11.46, 7.10, 6.56
Tasks: 221 total, 28 running, 193 sleeping, 0 stopped, 0 zombie
Cpu(s): 66.9%us, 22.1%sy, 0.0%ni, 2.6%id, 3.1%wa, 0.0%hi, 0.7%si, 4.5%st
Mem: 7871900k total, 7850624k used, 21276k free, 68728k buffers
Swap: 0k total, 0k used, 0k free, 3750664k cached
The majority of the processes look like:
24720 www-data 15 0 202m 26m 4412 S 9 0.3 0:02.97 apache2
24530 www-data 15 0 212m 35m 4544 S 7 0.5 0:03.05 apache2
24846 www-data 15 0 209m 33m 4420 S 7 0.4 0:01.03 apache2
24083 www-data 15 0 211m 35m 4484 S 7 0.5 0:07.14 apache2
24615 www-data 15 0 212m 35m 4404 S 7 0.5 0:02.89 apache2
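As a rough memory sanity check (a back-of-the-envelope sketch only, since RSS double-counts pages shared between children and so overstates real usage), the per-child footprint can be summed like this:
# Sum resident memory across apache2 workers; treat the total as an upper bound,
# because shared pages (e.g. the PHP module) are counted once per child.
ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {printf "%d children, avg %.0f MB, total %.0f MB\n", n, sum/n/1024, sum/1024}'
At the ~26-35 MB per child shown above, the MaxClients cap of 150 works out to roughly 4-5 GB, which fits in the instance's ~7.5 GB of RAM but only by squeezing the ~3.6 GB currently held as page cache.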
Example output from vmstat taken at the same time as the top output above:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
8 0 0 215084 68908 3774864 0 0 154 228 5 7 32 12 42 9
6 21 0 198948 68936 3775740 0 0 676 2363 4022 1047 56 16 9 15
23 0 0 169460 68936 3776356 0 0 432 1372 3762 835 76 21 0 0
23 1 0 140412 68936 3776648 0 0 280 0 3157 827 70 25 0 0
20 1 0 115892 68936 3776792 0 0 188 8 2802 532 68 24 0 0
6 1 0 133368 68936 3777780 0 0 752 71 3501 878 67 29 0 1
0 1 0 146656 68944 3778064 0 0 308 2052 3312 850 38 17 19 24
2 0 0 202104 68952 3778140 0 0 28 90 2617 700 44 13 33 5
9 0 0 188960 68956 3778200 0 0 8 0 2226 475 59 17 6 2
3 0 0 166364 68956 3778252 0 0 0 21 2288 386 65 19 1 0
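(For context, the samples above were gathered with something like the command below; note that the first row vmstat prints is an average since boot, so only the later rows reflect the load at the time.)
# Report virtual-memory statistics every 5 seconds; the first line is a
# since-boot average and should be ignored when judging current load.
vmstat 5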
And finally, output from Apache's server-status:
Server uptime: 31 days 2 hours 18 minutes 31 seconds
Total accesses: 60102946 - Total Traffic: 974.5 GB
CPU Usage: u209.62 s75.19 cu0 cs0 - .0106% CPU load
22.4 requests/sec - 380.3 kB/second - 17.0 kB/request
107 requests currently being processed, 6 idle workers
C.KKKW..KWWKKWKW.KKKCKK..KKK.KKKK.KK._WK.K.K.KKKKK.K.R.KK..C.C.K
K.C.K..WK_K..KKW_CK.WK..W.KKKWKCKCKW.W_KKKKK.KKWKKKW._KKK.CKK...
KK_KWKKKWKCKCWKK.KKKCK..........................................
................................................................
From my limited experience I draw the following conclusions/questions:
- We may be allowing far too many KeepAlive requests.
- I do see some time spent waiting for IO in the vmstat output, although not consistently and not a lot (I think?), so I am not sure whether this is a big concern; I am less experienced with vmstat.
- Also in vmstat, some iterations show a number of processes waiting for CPU time (the r column), which is what I am attributing the initial page-load delay on our web server to, possibly erroneously.
- We serve a mixture of static content (75% or higher) and script content, and the script content is often fairly processor-intensive, so finding the right balance between the two is important; long term we want to move the static content elsewhere so we can optimize both, but our software is not ready for that today.
I am happy to provide additional information if anybody has ideas. The other note is that this is a high-availability production installation, so I am wary of making tweak after tweak; that is why I have not yet experimented with things like the KeepAlive value myself.
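If it matters, the first change I would probably try once I can schedule it (a candidate only, not something we run today) is a shorter keepalive window, along these lines:
# Candidate tweak, not currently applied: a shorter timeout frees workers
# sooner at the cost of slightly more connection setup for repeat requests.
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 2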