Measuring performance indicators on a cluster

Posted by Aditya Singh on Server Fault
Published on 2012-06-09T20:19:03Z Indexed on 2012/06/09 22:42 UTC

My architecture is based on Amazon. An ELB load balancer distributes POST requests across m1.large instances.

Every instance runs an nginx server on port 80, which distributes requests to 4 python-tornado backend servers that handle them. Each tornado server takes about 5-10 ms to respond to a request, but that is only the internal compute time per request.
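The nginx layer described above would look something like the sketch below. The upstream name, ports, and log path are assumptions (the question only says there are 4 tornado servers per instance); `$request_time` and `$upstream_response_time` are built-in nginx variables that record per-request latency, which is one way to get the timing data offline.

```nginx
http {
    # Log total request time and time spent waiting on the tornado upstream,
    # so per-request latency can be analyzed from the access log later.
    log_format timing '$remote_addr [$time_local] "$request" $status '
                      'req_time=$request_time upstream_time=$upstream_response_time';

    upstream tornado_backends {
        # Ports are hypothetical; adjust to wherever the 4 tornado servers listen.
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
        server 127.0.0.1:8004;
    }

    server {
        listen 80;
        access_log /var/log/nginx/timing.log timing;

        location / {
            proxy_pass http://tornado_backends;
        }
    }
}
```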

I want to put this setup under test: measure the response time from the ELB to the upstream and back, see how it varies as QPS (throughput) increases, and plot a graph of time vs. QPS vs. latency along with other factors such as CPU and memory. Is there software to do this, or should I log everything with latency checks and then analyze the whole log afterwards? I would also need to write a self-monitor that keeps checking the overall response time. Is it possible to do this with a script from within the server, and if so, will it be accurate?
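For the self-monitoring part, a minimal probe could look like the sketch below, in Python since the backends are python-tornado. The URL, sample count, and percentile choices are placeholders, not from the question. Note that a probe run on the instance itself only times the nginx-to-tornado path; to include the ELB hop it must request the ELB's public DNS name, and the client-side timer adds its own small overhead.

```python
import time
import urllib.request


def measure_latency(url, timeout=5.0):
    """Issue one GET request and return (status_code, elapsed_seconds)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so the full response is timed
        status = resp.status
    return status, time.perf_counter() - start


def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, min(len(ordered), round(p / 100.0 * len(ordered))))
    return ordered[rank - 1]


def monitor(url, n=100):
    """Collect n latency samples and summarize the usual percentiles."""
    samples = [measure_latency(url)[1] for _ in range(n)]
    return {p: percentile(samples, p) for p in (50, 95, 99)}
```

Run periodically (e.g. from cron) and correlated with the QPS and CPU/memory figures from the same interval, this gives the Time vs. QPS vs. Latency data points to plot; it measures trends accurately, though it cannot see client-side network latency outside AWS.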

© Server Fault or respective owner
