Alternative or successor to GDBM

Posted by Anon Guy on Stack Overflow on 2009-03-29

We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem.
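For context, the application's read path is essentially a stream of point lookups against the GDBM file. A minimal sketch of that access pattern using the standard GDBM C API (the file name and key below are placeholders, not our real schema):

```cpp
// Minimal sketch of the read path: read-only open, single key fetch.
// "data.gdbm" and the key are placeholders, not our real schema.
// Build with something like: g++ lookup.cpp -lgdbm
#include <gdbm.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    char path[] = "data.gdbm";
    GDBM_FILE dbf = gdbm_open(path, 0, GDBM_READER, 0644, nullptr);
    if (!dbf) {
        std::fprintf(stderr, "gdbm_open failed\n");
        return 1;
    }

    const char *k = "user:1234";
    datum key{const_cast<char *>(k), static_cast<int>(std::strlen(k))};
    datum val = gdbm_fetch(dbf, key);   // each fetch is a random read into the file
    if (val.dptr) {
        std::printf("%.*s\n", val.dsize, val.dptr);
        std::free(val.dptr);            // gdbm_fetch returns malloc'd memory
    }

    gdbm_close(dbf);
    return 0;
}
```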

This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS on the local network), sometimes as high as 30 seconds. I believe a large part of the problem is that the application makes many random reads from the GDBM files, and these are slow over NFS. This will only get worse in production (where the front end and back end have even more network hardware between them) and as the database keeps growing.
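One cheap way to confirm where the time goes is to time the fetches themselves against a local copy of the database and against the NFS mount, then compare. A rough harness along those lines (again, the path and key format are illustrative, and it assumes keys like these actually exist):

```cpp
// Rough per-fetch latency harness: run once against a local copy of the
// database and once against the NFS-mounted copy, then compare the averages.
// Path and key format are illustrative; real keys would be sampled from the DB.
#include <gdbm.h>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

int main(int argc, char **argv) {
    char fallback[] = "data.gdbm";
    char *path = argc > 1 ? argv[1] : fallback;
    GDBM_FILE dbf = gdbm_open(path, 0, GDBM_READER, 0644, nullptr);
    if (!dbf) {
        std::fprintf(stderr, "gdbm_open(%s) failed\n", path);
        return 1;
    }

    const int n = 10000;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        std::string k = "user:" + std::to_string(i);   // assumes such keys exist
        datum key{const_cast<char *>(k.c_str()), static_cast<int>(k.size())};
        datum val = gdbm_fetch(dbf, key);
        if (val.dptr) std::free(val.dptr);
    }
    auto elapsed = std::chrono::steady_clock::now() - start;
    double avg_us = std::chrono::duration<double, std::micro>(elapsed).count() / n;
    std::printf("%d fetches, %.1f us/fetch on average\n", n, avg_us);

    gdbm_close(dbf);
    return 0;
}
```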

While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and help from our Unix admins. My main constraint is time: I only have these resources for a few weeks.

As I see it, my options are:

  1. Improve NFS performance by tuning parameters. My instinct is that we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.

  2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet (a rough sketch of the latter follows this list).

  3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).
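For option 2, here is roughly what the read side would look like with Tokyo Cabinet's hash database API. This is only a sketch to show the shape of the change, not something we have prototyped; the file name and key are placeholders:

```cpp
// Rough sketch of the equivalent read path with Tokyo Cabinet's hash database.
// "data.tch" and the key are placeholders; we have not prototyped this.
// Build with something like: g++ tclookup.cpp -ltokyocabinet
#include <tcutil.h>
#include <tchdb.h>
#include <cstdio>
#include <cstdlib>

int main() {
    TCHDB *hdb = tchdbnew();
    if (!tchdbopen(hdb, "data.tch", HDBOREADER)) {
        std::fprintf(stderr, "open error: %s\n", tchdberrmsg(tchdbecode(hdb)));
        tchdbdel(hdb);
        return 1;
    }

    char *val = tchdbget2(hdb, "user:1234");   // NUL-terminated convenience variant
    if (val) {
        std::printf("%s\n", val);
        std::free(val);                        // caller frees the returned buffer
    }

    tchdbclose(hdb);
    tchdbdel(hdb);
    return 0;
}
```

memcachedb would be a bigger change, since as I understand it, it is a network server speaking the memcached protocol rather than a library reading a local file; on the other hand, that would sidestep NFS entirely.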

How should I approach this problem?
