The environment:
Amazon EC2 Instance - m1.medium
Ubuntu 12.04
Apache 2.2.22 - Running a Drupal Site
Using MySQL DB Server
RAM info:
~$ free -gt
                   total       used       free     shared    buffers     cached
Mem:                   3          1          2          0          0          0
-/+ buffers/cache:     0          2
Swap:                  0          0          0
Total:                 3          1          2
Hard drive info:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  4.7G  2.9G  62% /
udev            1.9G  8.0K  1.9G   1% /dev
tmpfs           751M  180K  750M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            1.9G     0  1.9G   0% /run/shm
/dev/xvdb       394G  199M  374G   1% /mnt
The problem:
About two days ago the site started failing because the MySQL server was shut down by Apache, with the following kernel message:
kernel: [2963685.664359] [31716]   106 31716   226946    22748   0       0       0 mysqld
kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
kernel: [2963686.169294] init: mysql main process ended, respawning
That says the process had a total-vm of 907784 kB, i.e. roughly 0.9 GB of virtual memory (and only about 89 MB actually resident, per anon-rss), yet free shows 2 GB of RAM free, so even counting that 0.9 GB there should still have been about 1 GB left. I understand that on Linux applications can allocate more memory than is physically available, but I don't know whether that is the problem here; it's the first time this has happened. The MySQL server then tries to restart, but apparently there is no memory for it and the restart fails. Here is its error log:
Plugin 'FEDERATED' is disabled.
The InnoDB memory heap is disabled
Mutexes and rw_locks use GCC atomic builtins
Compressed tables use zlib 1.2.3.4
Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
Completed initialization of buffer pool
Fatal error: cannot allocate memory for the buffer pool
Plugin 'InnoDB' init function returned error.
Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
Unknown/unsupported storage engine: InnoDB
[ERROR] Aborting
[Note] /usr/sbin/mysqld: Shutdown complete
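For what it's worth, the mmap() that fails is the 128 MB InnoDB buffer pool (137363456 bytes is the 128 MB pool plus some overhead), and errno 12 is ENOMEM. I haven't tried it, but I assume that pool is the innodb_buffer_pool_size setting and could be shrunk so MySQL at least comes back up when memory is tight; a rough sketch of what I think that would look like (assuming the stock Ubuntu config at /etc/mysql/my.cnf):

[mysqld]
# Default is 128M; this is the single biggest allocation mysqld makes at startup
innodb_buffer_pool_size = 64M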
I simply restarted the MySQL service. About two hours later it happened again, so I restarted it again; it then happened once more nine hours later. That made me think of the MaxClients parameter in apache.conf, so I went and checked it: it was set to 150. I decided to drop it to 60, like so:
<IfModule mpm_prefork_module>
...
MaxClients 60
</IfModule>
<IfModule mpm_worker_module>
...
MaxClients 60
</IfModule>
<IfModule mpm_event_module>
...
MaxClients 60
</IfModule>
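For reference, I believe these are the standard commands to check and apply the change on Ubuntu 12.04 (configtest is only a syntax check):

apache2ctl configtest
sudo service apache2 restart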
Once I did that and restarted the apache2 service, everything ran smoothly for about three quarters of a day. Then, during the night, the MySQL service shut down once again, but this time it wasn't Apache that triggered it: mysqld itself invoked the OOM killer, with the following message:
kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal
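In case it's useful, I believe the full OOM reports can be pulled out of the logs like this (standard Ubuntu locations, as far as I know):

grep -i oom /var/log/syslog        # full OOM-killer reports end up in syslog
dmesg | grep -i 'killed process'   # same messages from the kernel ring buffer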
Now I'm out of ideas. Some articles say the right thing to do is to change the kernel's behaviour by adding the following to /etc/sysctl.conf:
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
That way no overcommit would take place. Is this the way to go? Keep in mind I'm not a server administrator; I only have basic knowledge.
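If I understand correctly, the change would be applied like this (sysctl -p reloads /etc/sysctl.conf, and the second command just verifies the values, no reboot needed):

sudo sysctl -p
sysctl vm.overcommit_memory vm.overcommit_ratio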
Thanks a bunch in advance.