linux new/delete, malloc/free large memory blocks

Hi folks,

We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The machine has 4 GB of physical memory. Swap is disabled for speed reasons.

Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters, but is typically around 1.2 GB and can be up to about 1.9 GB. When the request has completed, the buffer is released using 'delete'.
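
To illustrate, the allocation pattern is roughly as follows (a simplified sketch - handle_request and the buffer contents are placeholders, not our real code):

    #include <cstddef>
    #include <new>

    // Simplified sketch of the per-request allocation pattern described above.
    void handle_request(std::size_t bufferSize)   // typically ~1.2 GB, up to ~1.9 GB
    {
        char* buffer = new char[bufferSize];      // throws std::bad_alloc on failure
        // ... fill and process the buffer ...
        delete[] buffer;                          // released when the request completes
    }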

This works fine for several consecutive requests that allocate buffers of the same size, or when a request allocates a smaller buffer than the previous one. The memory appears to be freed correctly - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard.

The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It is as if the memory freed by the first allocation cannot be reallocated, even though there is sufficient free physical memory available.

If I kill and restart the server process after the first operation, the second request for a larger buffer succeeds, i.e. killing the process appears to fully release the freed memory back to the system.

Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way memory is released back to the system.
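
Something along these lines is what I have in mind (an untested sketch - the threshold values are guesses rather than tuned figures):

    #include <malloc.h>   // mallopt, malloc_trim (glibc)
    #include <cstddef>
    #include <cstdlib>    // std::malloc, std::free
    #include <cstdio>     // std::perror

    int main()
    {
        // Requests larger than this threshold are serviced with mmap(),
        // so the matching free() unmaps the pages instead of leaving a
        // hole in the heap (128 KB here is just an illustrative value).
        mallopt(M_MMAP_THRESHOLD, 128 * 1024);

        // Return free heap memory to the kernel more eagerly.
        mallopt(M_TRIM_THRESHOLD, 128 * 1024);

        const std::size_t size = 1200UL * 1024 * 1024;   // ~1.2 GB, as in our requests
        void* buffer = std::malloc(size);
        if (!buffer) {
            std::perror("malloc");
            return 1;
        }
        // ... process the request ...
        std::free(buffer);

        // Optionally ask glibc to hand any remaining free heap back to the OS.
        malloc_trim(0);
        return 0;
    }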

BTW - I'm not sure whether it's relevant to our problem, but the server uses pthreads that are created and destroyed on each processing request.

Cheers,

Brian.
