Deploying Memcached as 32-bit or 64-bit?

Posted by rlotun on Server Fault, 2011-01-12
I'm curious how people deploy memcached on 64-bit machines. Do you compile a 64-bit (standard) memcached binary and run that, or do you compile it in 32-bit mode and run N instances (where N = machine_RAM / 4 GB)?
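
For illustration, a minimal sketch of the second approach, assuming a memcached binary on the PATH and its standard -d/-p/-m/-u flags; the port base, per-instance memory cap, and user name are placeholders, not a recommendation:

    # Hypothetical sketch: launch N memcached instances on one box, each
    # capped below the 4 GB a 32-bit process can address.
    import subprocess

    TOTAL_RAM_MB = 64 * 1024    # example: a 64 GB box
    PER_INSTANCE_MB = 3 * 1024  # stay comfortably under 4 GB per instance
    BASE_PORT = 11211

    n_instances = TOTAL_RAM_MB // (4 * 1024)  # N = machine_RAM / 4 GB

    for i in range(n_instances):
        subprocess.Popen([
            "memcached",
            "-d",                         # run as a daemon
            "-p", str(BASE_PORT + i),     # one TCP port per instance
            "-m", str(PER_INSTANCE_MB),   # memory cap in MB for this instance
            "-u", "memcache",             # drop privileges (assumed user)
        ])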

Consider a recommended deployment of Redis (from the Redis FAQ):

Redis uses a lot more memory when compiled for 64 bit target, especially if the
dataset is composed of many small keys and values. Such a database will, for
instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB
for 64 bit! That's a big difference. You can run 32 bit Redis binaries in a 64
bit Linux and Mac OS X system without problems. For OS X just use make 32bit.
For Linux instead, make sure you have libc6-dev-i386 installed, then use make
32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you
have to edit the Makefile and replace "-arch i386" with "-m32".  If your
application is already able to perform application-level sharding, it is very
advisable to run N instances of Redis 32bit against a big 64 bit Redis box
(with more than 4GB of RAM) instead than a single 64 bit instance, as this is
much more memory efficient.

Would not the same recommendation also apply to a memcached cluster?
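
For context, from the client side an N-instance deployment would look roughly like the sketch below; memcached clients such as python-memcached hash keys across the server list, so the application-level sharding the Redis FAQ asks for is largely handled by the client library. The ports and key names here are placeholders:

    # Sketch of client-side sharding across several local memcached
    # instances, assuming the python-memcached package is installed and
    # four instances are listening on consecutive ports.
    import memcache

    servers = ["127.0.0.1:%d" % (11211 + i) for i in range(4)]
    mc = memcache.Client(servers)   # keys are hashed across the server list

    mc.set("some_key", "some_value")
    print(mc.get("some_key"))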
