Determining a realistic measure of requests per second for a web server

Posted by Don on Server Fault. Published on 2012-03-22T00:33:19Z.

I'm setting up an nginx stack and optimizing the configuration before going live. Running ab to stress-test the machine, I was disappointed to see things top out at 150 requests per second, with a significant number of requests taking more than 1 second to return. Oddly, the machine itself wasn't even breathing hard.

I finally thought to ping the box and saw ping times around 100-125 ms. (The machine, to my surprise, is across the country.) So it seems like network latency is dominating my testing. Running the same tests from a machine on the same network as the server (ping times < 1 ms), I see > 5000 requests per second, which is more in line with what I expected from the machine.
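A back-of-the-envelope way to see why latency dominates here (my own sketch, not part of the original question; the concurrency figure of 20 is an assumed ab `-c` value, not stated in the post): each connection can complete at most one request per round trip, so observed throughput is roughly bounded by concurrency divided by RTT.

```python
# Sketch: latency-bounded throughput for a remote benchmark.
# Assumption (not from the post): each connection completes at most
# one request per round trip, so throughput <= concurrency / RTT.

def max_requests_per_second(concurrency, rtt_seconds):
    """Upper bound on req/s when round-trip latency dominates."""
    return concurrency / rtt_seconds

# With a ~0.125 s cross-country RTT, a single connection tops out
# near 8 req/s; ~150 req/s observed suggests roughly 20 connections
# in flight (a hypothetical -c value for illustration).
print(max_requests_per_second(1, 0.125))   # 8.0 req/s per connection
print(max_requests_per_second(20, 0.125))  # 160.0 req/s, close to observed
```

This is why the same hardware looks 30x faster from a low-latency client: the bound scales with 1/RTT, not with server capacity.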

But this got me thinking: how do I determine and report a "realistic" measure of requests per second for a web server? You always see claims about performance, but shouldn't network latency be taken into consideration? Sure, I can serve 5000 requests per second to a machine next to the server, but not to one across the country. If I have a lot of slow connections, won't they eventually impact my server's performance? Or am I thinking about this all wrong?
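One way to frame the two numbers (my framing, not from the post; the slot count and service time below are illustrative assumptions): the remote benchmark measures a client-side, latency-bounded view, while the machine's real capacity depends on how long each request occupies the server.

```python
# Sketch contrasting two throughput bounds. Figures are assumptions
# chosen to match the numbers in the post, not measurements.

def client_side_bound(concurrency, rtt_s):
    # What a remote load generator like ab can observe:
    # limited by round-trip latency, not by the server.
    return concurrency / rtt_s

def server_side_bound(slots, service_time_s):
    # What the machine can process, assuming each request holds a
    # server slot only for its service time (event-driven servers
    # such as nginx don't tie a worker up for a slow transfer).
    return slots / service_time_s

print(client_side_bound(20, 0.125))   # 160.0 -- latency-dominated view
print(server_side_bound(4, 0.0008))   # 5000.0 -- capacity view
```

On this reasoning, reporting the LAN figure as capacity and the RTT-bounded figure as per-client experience are both "realistic"; they just answer different questions.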

Forgive me if this is network engineering 101 stuff. I'm a developer by trade.

Update: Edited for clarity.

© Server Fault or respective owner
