Background:
I have a Django application. It works and responds well under low load, but under high load (around 100 users/sec) it hits 100% CPU and then slows down for lack of CPU.
Problem:
Profiling the application tells me the time taken by each function, and this time increases under high load. But that time could be spent either on genuinely complex calculation or on waiting for the CPU.
So, how do I find the CPU cycles actually consumed by a piece of code?
Since reducing the CPU consumption would improve the response time, I want to know which case I'm in:
I might have written extremely efficient code and simply need to add more CPU power,
OR
I might have some stupid code hogging the CPU and causing the slowdown.
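One way to tell the two cases apart is to compare wall-clock time with CPU time: time.perf_counter() measures elapsed time, while time.process_time() counts only the CPU time of the current process, so a large gap between the two means the code was waiting rather than computing. A minimal sketch (the helper name and the parse_response call are hypothetical stand-ins for the code under test):

```python
import time

def cpu_vs_wall(func, *args, **kwargs):
    """Compare CPU time against wall-clock time for a single call.

    wall >> cpu  -> the call mostly waited (I/O, locks, CPU starvation)
    wall ~= cpu  -> the call is genuinely CPU-bound
    """
    wall0 = time.perf_counter()
    cpu0 = time.process_time()
    result = func(*args, **kwargs)
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    print(f"{func.__name__}: wall={wall:.3f}s cpu={cpu:.3f}s")
    return result

# Hypothetical usage on the code under test:
# cpu_vs_wall(parse_response, raw_json)
```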
Any help is appreciated!
Update:
I am using JMeter to load-test my webapp; it gives me a throughput of 2 requests/sec with 100 concurrent users.
I get an average response time of 36 seconds at 100 concurrent requests vs 1.25 seconds for a single request.
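As a sanity check (my own arithmetic, via Little's law L = λ × W): 2 requests/sec × 36 s ≈ 72 requests in flight, which is roughly in line with 100 concurrent users, so these numbers look like a genuinely saturated server rather than a measurement artifact.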
More Info:
- Configuration: Nginx + uWSGI with 4 workers (see the back-of-envelope check after this list).
- No database is used; the responses come from a REST API.
- The REST API response is cached on the first hit, so it makes no difference afterwards.
- Using ujson for JSON parsing.
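A rough capacity check with these numbers (my own back-of-envelope, assuming requests are CPU-bound): if a single request needs about 1.25 s of CPU and there are 4 uWSGI workers, the theoretical ceiling is

    4 workers / 1.25 s per request ≈ 3.2 requests/sec

which is in the same ballpark as the ~2 requests/sec JMeter measures, so the saturation is consistent with each request simply costing too much CPU.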
Curious to Know:
Python/Django is used by so many organizations for so many big sites, so there must be some serious, high-end debugging and memory/CPU analysis tools out there.
All I found were casual snippets of code that perform profiling.
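The closest I've gotten is wiring the standard library's cProfile into a Django middleware so that individual requests can be profiled in place; django-debug-toolbar and django-silk seem to be the more polished options. Below is a minimal sketch of the middleware idea (the class name and the ?profile query flag are my own, not from any package):

```python
import cProfile
import io
import pstats

class ProfileMiddleware:
    """Profile a single request with cProfile when '?profile' is in the URL."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if "profile" not in request.GET:
            return self.get_response(request)
        profiler = cProfile.Profile()
        profiler.enable()
        response = self.get_response(request)
        profiler.disable()
        buf = io.StringIO()
        # 'cumulative' includes time spent in sub-calls; show the top 20 entries
        pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(20)
        print(buf.getvalue())  # or send it to a logger instead
        return response
```

Adding the class to MIDDLEWARE in settings.py and appending ?profile to any URL then dumps a per-request profile without touching the view code.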