Hello,
I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it uses memory as intended by the startup configuration. However, after some unknown user action in the single web application JBoss is serving (or possibly once the log file grows to a certain size), memory usage increases dramatically and JBoss freezes. Once it freezes, it is difficult to kill the process or do anything else because so little memory is left.
When the process is finally killed with kill -9 and the server is restarted, server.log is very small and contains only the startup output of the newly started process, with nothing about why memory increased so much. This is what makes it so hard to debug: server.log holds no information from the killed process. The log is allowed to grow to 2 GB, yet after a restart it is only about 300 KB, although it grows normally while memory usage is normal.
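For reference, I believe the 2 GB cap comes from a RollingFileAppender in conf/log4j.xml; roughly like this (paraphrased from memory, not copied exactly from the server, and the appender name and backup count are guesses):

   <!-- conf/log4j.xml (approximate): the appender that writes server.log -->
   <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
      <param name="File" value="${jboss.server.log.dir}/server.log"/>
      <param name="Append" value="true"/>
      <param name="MaxFileSize" value="2GB"/>
      <param name="MaxBackupIndex" value="1"/>
      <layout class="org.apache.log4j.PatternLayout">
         <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
      </layout>
   </appender>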
This is information on the JBoss configuration:
JBoss (MX MicroKernel) 4.0.3
JDK 1.6.0 update 22
PermSize=512m
MaxPermSize=512m
Xms=1024m
Xmx=6144m
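These values are passed to the JVM through JAVA_OPTS, which I assume is set in bin/run.conf; reconstructed from the values above (not the exact line from the server), it would look roughly like:

   # bin/run.conf (approximate)
   JAVA_OPTS="-Xms1024m -Xmx6144m -XX:PermSize=512m -XX:MaxPermSize=512m"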
This is basic info on the system:
Operating system: CentOS Linux 5.5
Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
Processor information: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores
This is typical system information under normal, pre-freeze conditions, a few minutes after the jboss service starts:
Running processes: 183
CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins)
CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
Real memory: 17.38 GB total, 2.46 GB used
Virtual memory: 19.59 GB total, 0 bytes used
Local disk space: 113.37 GB total, 11.89 GB used
When JBoss freezes, system information looks like this:
Running processes: 225
CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins)
CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
Real memory: 17.38 GB total, 17.18 GB used
Virtual memory: 19.59 GB total, 706.29 MB used
Local disk space: 113.37 GB total, 11.89 GB used