Limiting object allocation over multiple threads
- by John
I have an application which retrieves and caches the results of a client's query. The client then requests chunks of that data, and the application sends the relevant results and removes them from the cache.
A new requirement for this application is a run-time-configurable maximum on the number of results that may be cached. I've taken the naive approach and implemented this with a counter protected by a lock, incremented every time a result is cached and decremented whenever one is removed from the cache.
Unfortunately, this has drastically reduced the application's performance when processing a large number of concurrent requests. I have tried both a critical section and a spin-lock; performance improves a bit with the spin-lock, but is still unacceptably slow. Is there a better way to solve this problem that might improve performance?
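To be concrete, the spin-lock I tried is roughly the usual test-and-set loop over a std::atomic_flag; a minimal sketch (not my exact code) looks like this:

#include <atomic>

class SpinLock
{
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    // Busy-wait until this thread successfully claims the flag.
    void lock()   { while( flag.test_and_set( std::memory_order_acquire ) ) { } }
    // Release the flag so another thread can claim it.
    void unlock() { flag.clear( std::memory_order_release ); }
};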
Right now I have a thread pool that services requests, and each request is tied to a Request object which stores the cached results for that particular request. Here is a simplified version of my current implementation, with std::mutex standing in for the locks I actually tried:
#include <mutex>

// Shared state (simplified):
std::mutex countMutex;        // guards totalResultsCached
std::mutex limitMutex;        // guards cachedLimit
int totalResultsCached = 0;
int cachedLimit;              // run-time configurable maximum

void ResultCallback( Result result, Request *request )
{
    {
        // Take both locks, always in the same order, so concurrent
        // callers cannot deadlock against each other.
        std::lock_guard<std::mutex> countLock( countMutex );
        std::lock_guard<std::mutex> limitLock( limitMutex );
        if( totalResultsCached + 1 > cachedLimit )
        {
            // Cache is full: cancel the request.
            return;
        }
        ++totalResultsCached;
    }   // both locks released before touching the request
    request->add( result );
}

void SendResults( int resultsToSend, Request *request )
{
    while( resultsToSend > 0 )
    {
        send( request->remove() );
        {
            // Briefly retake the counter lock to release the slot.
            std::lock_guard<std::mutex> countLock( countMutex );
            --totalResultsCached;
        }
        --resultsToSend;
    }
}
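One direction I've wondered about is replacing the locked counter with a std::atomic and a compare-and-swap loop, something like the sketch below (TryReserveSlot and ReleaseSlot are names I've made up for illustration), but I don't know whether this is the right approach or whether it would actually help under contention:

#include <atomic>

std::atomic<int> totalResultsCached{ 0 };
std::atomic<int> cachedLimit{ 0 };   // set from the run-time configuration

// Try to claim one cache slot without taking a lock.
bool TryReserveSlot()
{
    int count = totalResultsCached.load();
    while( count < cachedLimit.load() )
    {
        // CAS: claim the slot unless another thread changed the count
        // first, in which case 'count' is reloaded and we retry.
        if( totalResultsCached.compare_exchange_weak( count, count + 1 ) )
            return true;
    }
    return false;   // cache is full; the caller cancels the request
}

// Give a slot back after a result has been sent.
void ReleaseSlot()
{
    totalResultsCached.fetch_sub( 1 );
}

The idea would be for ResultCallback to call TryReserveSlot() instead of taking the two locks, and for SendResults to call ReleaseSlot() after each send. Is something like this a reasonable direction, or is there a better-known pattern for this problem?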