Minimum percentage of free physical memory that Linux requires for optimal performance
Posted by csoto on Oracle Blogs, Tue, 22 Oct 2013 20:17:29 +0000
Filed under: /Oracle/Exalogic
Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly on physical compute nodes.
Under normal conditions, on nodes without any application running, you may see the OS take (for example) between 24 and 25 GB of memory.
Linux reports free memory differently, however, and most of those 25 GB (in this example) are actually available for user processes.
For example: Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers
MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains this in section 4, "Final Remarks":
Free Memory and Used Memory
Estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is: an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand.
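To see this in practice, here is a minimal Python sketch (not part of the MOS note) that parses /proc/meminfo on a Linux node and estimates the memory actually available to applications, counting buffers and page cache as reclaimable. On kernels 3.14 and later the MemAvailable field gives the kernel's own, more accurate estimate; the fallback is the classic free-plus-cache approximation.

#!/usr/bin/env python3
# Sketch: estimate memory actually available to applications by
# treating buffers and page cache as reclaimable, as described above.

def read_meminfo():
    """Parse /proc/meminfo into a dict of {field: kB}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info

def available_kb(info):
    if "MemAvailable" in info:  # kernel's own estimate (3.14+)
        return info["MemAvailable"]
    # Classic approximation: free + reclaimable buffers and page cache
    return info["MemFree"] + info["Buffers"] + info["Cached"]

if __name__ == "__main__":
    info = read_meminfo()
    print(f"Total:     {info['MemTotal']} kB")
    print(f"Free:      {info['MemFree']} kB")
    print(f"Available: {available_kb(info)} kB (free + reclaimable cache)")

Run against the example output above, such a calculation would report far more than 75405920k as available, since most of the "used" memory is reclaimable cache.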
That said, focusing on the percentage question: apart from the memory the OS takes, what is the minimum free memory that must be available on every node for it to operate normally?
The answer: as a rule of thumb, 80% memory utilization is a good threshold; anything higher than that should be investigated and remedied.
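A hypothetical check of that rule of thumb might look like the following sketch; the 0.80 threshold and the warning text are illustrative, and utilization is computed net of reclaimable cache, consistent with the discussion above.

THRESHOLD = 0.80  # rule-of-thumb threshold from this post

def meminfo():
    """Parse /proc/meminfo into {field: kB}."""
    with open("/proc/meminfo") as f:
        return {k: int(v.split()[0]) for k, v in (line.split(":", 1) for line in f)}

info = meminfo()
# Count reclaimable buffers/page cache as available, per the MOS note.
avail = info.get("MemAvailable", info["MemFree"] + info["Buffers"] + info["Cached"])
util = (info["MemTotal"] - avail) / info["MemTotal"]
print(f"Memory utilization: {util:.1%}")
if util > THRESHOLD:
    print("Above 80% - investigate and remedy memory consumers on this node.")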