Automatically Kill/Restart Process(es) When Memory is Critically Low


I have a Debian Wheezy VPS box where I'm running a couple of Django apps in production. Ideally, I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM, or adding swap. But I doubt there's much memory optimization left to milk from the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer the option to use swap!

So, in the meantime (while I wait to secure the resources for more RAM), I wish to mitigate the scenario where the server runs out of memory entirely, leaving me no choice but to request a VPS restart (at that point, I can't even SSH into the box!).

What I would love in a solution is the ability to detect when a process (or, more generally, total system memory usage) exceeds a critical threshold (for example, when free RAM falls below, say, 10%). I've noticed this happens after the VPS has been up for a long time, or when traffic suddenly spikes on some of the heavier apps (most are just staging apps anyway).
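
Something like this is what I have in mind for the detection step (a minimal Python sketch reading /proc/meminfo; the 10% figure above is just my example threshold, and the MemFree + Buffers + Cached fallback is my rough approximation for older kernels like Wheezy's 3.2 that lack MemAvailable):

    def free_ram_percent():
        """Return available RAM as a percentage of total, from /proc/meminfo."""
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, value = line.split(':', 1)
                info[key] = int(value.split()[0])  # values are reported in kB
        # MemAvailable only exists on newer kernels; on Wheezy's 3.2, fall
        # back to MemFree + Buffers + Cached as a rough approximation.
        available = info.get('MemAvailable',
                             info['MemFree'] + info.get('Buffers', 0)
                             + info.get('Cached', 0))
        return 100.0 * available / info['MemTotal']

    print('Free RAM: %.1f%%' % free_ram_percent())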

When that happens, I wish to be able to kill/restart the offending process(es) - most likely Apache. Doing this manually in these situations has restored sane memory usage levels, which hints that one or more of the Django apps may have a memory leak.
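
For the "offending process" part, I imagine scanning /proc for the biggest resident-set consumer would do (a rough sketch; picking the single process with the largest RSS is my simplification - in practice it's Apache's workers that show up at the top for me):

    import os

    def top_memory_process():
        """Return (pid, name, rss_kb) for the process with the largest RSS."""
        top = (None, None, 0)
        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                name, rss = None, 0
                with open('/proc/%s/status' % pid) as f:
                    for line in f:
                        if line.startswith('Name:'):
                            name = line.split()[1]
                        elif line.startswith('VmRSS:'):
                            rss = int(line.split()[1])  # kB
                if rss > top[2]:
                    top = (int(pid), name, rss)
            except IOError:
                # The process exited while we were reading it; skip.
                continue
        return top

    print(top_memory_process())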


In brief:

  1. Monitor overall system RAM usage.
  2. When free RAM falls below a given critical threshold (say, below 10%), kill/restart the offending process(es) - or, more simply, since my current log analysis (using linux-dash) suggests Apache is often the offender, just kill/restart Apache.
  3. Rinse and repeat (a rough sketch of this whole loop follows below)...
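
Something like the following is what I'm picturing as the whole loop (a rough, untested Python watchdog; the threshold, the 30-second poll interval, and the assumption that restarting Apache via the stock sysvinit service command on Wheezy is the right fix are all mine to tune):

    import subprocess
    import time

    THRESHOLD = 10.0  # percent free RAM, as in step 2
    INTERVAL = 30     # seconds between checks

    def free_ram_percent():
        """Available RAM as a percentage of total (same as the sketch above)."""
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, value = line.split(':', 1)
                info[key] = int(value.split()[0])  # kB
        available = info.get('MemAvailable',
                             info['MemFree'] + info.get('Buffers', 0)
                             + info.get('Cached', 0))
        return 100.0 * available / info['MemTotal']

    while True:
        if free_ram_percent() < THRESHOLD:
            # Stock sysvinit way to bounce Apache on Wheezy; needs root.
            subprocess.call(['service', 'apache2', 'restart'])
        time.sleep(INTERVAL)

I'd run something like this as root (so the service command actually works), probably started from an init script or a @reboot cron entry. But is there an existing tool that does this properly, rather than my own script?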
