How should I implement simple caches with concurrency on Redis?

Posted by solublefish on Stack Overflow
Published on 2013-11-03T18:50:52Z

Background

I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process. I hope to move them to a shared Redis.

I have a dozen or so caches of simple, small-sized business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos.

In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply as:

Dictionary<int,Foo> _cacheFooById;
Dictionary<int,HashSet<int>> _indexFooIdsByOwnerId;

Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant:

"For a given group [say by OwnerId], the whole group is in cache or none of it is."

So when I cache miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates make sure to keep the index up to date and respect the invariant. When an owner calls GetMyFoos I never have to worry that some are cached and some aren't.
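That read path can be sketched like this (a minimal Python sketch; the dicts are in-memory stand-ins for the caches above, and `load_foos_by_owner` is a hypothetical stand-in for the RDBMS query, returning made-up sample rows):

```python
# In-memory stand-ins for the two dictionaries above.
cache_foo_by_id = {}          # FooId -> Foo
index_foo_ids_by_owner = {}   # OwnerId -> set of FooIds

def load_foos_by_owner(owner_id):
    # Hypothetical stand-in for the RDBMS query
    # (SELECT * FROM Foo WHERE OwnerId = ?); returns sample rows.
    sample = {7: [{"FooId": 1, "OwnerId": 7}, {"FooId": 2, "OwnerId": 7}]}
    return sample.get(owner_id, [])

def get_foo(foo_id, owner_id):
    """On a miss, pull the owner's whole group so the
    'whole group in cache or none of it' invariant holds."""
    foo = cache_foo_by_id.get(foo_id)
    if foo is None:
        for f in load_foos_by_owner(owner_id):
            cache_foo_by_id[f["FooId"]] = f
            index_foo_ids_by_owner.setdefault(owner_id, set()).add(f["FooId"])
        foo = cache_foo_by_id.get(foo_id)
    return foo

foo = get_foo(1, 7)   # miss: loads Foos 1 and 2 for owner 7 together
```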

What I did already

The first/simplest answer seems to be to use plain ol' SET and GET with a composite key and json value:

SET( "ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo));

I later decided I liked:

HSET( "ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));

That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes.

This works to first order. If my high-level code is like:

UpdateCache(myFoo);
AddToIndex(myFoo);

That translates into:

HSET ("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));
var myFoos = JsonDeserialize( HGET ("ServiceCache:FooIndex", theFoo.OwnerId) );
myFoos.Add(theFoo.FooId);
HSET ("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos));

However, this is broken in two ways.

  1. Two concurrent operations can read/modify/write at the same time. The latter "wins" the final HSET and the former's index update is lost.
  2. Another operation could read the index in between the first and second lines. It would miss a Foo that it should find.

So how do I index properly?

I think I could use a Redis set instead of a json-encoded value for the index. That would solve part of the problem since the "add-to-index-if-not-already-present" would be atomic.
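The point is that SADD happens server-side in one step, so there is no client-side read/modify/write to lose. A tiny sketch of that semantics (Python sets as an in-memory stand-in; the key name just follows the pseudocode above):

```python
# OwnerId index modeled as Redis sets: key -> set of members.
index = {}

def sadd(key, member):
    """Models Redis SADD: add-if-absent happens server-side in one
    atomic step, so concurrent adds cannot clobber each other."""
    index.setdefault(key, set()).add(member)

sadd("ServiceCache:FooIndexByOwner:7", 1)
sadd("ServiceCache:FooIndexByOwner:7", 2)
sadd("ServiceCache:FooIndexByOwner:7", 2)  # idempotent re-add is a no-op
```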

I also read about using MULTI as a "transaction", but it doesn't seem to do what I want. Am I right that I can't really MULTI; HGET; {update}; HSET; EXEC, since MULTI only queues the commands and the HGET's reply doesn't come back until EXEC runs them all?

I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys. So it's back to SET/GET instead of HSET/HGET. And now I need a new index-like-thing to support getting all the values in a given cache.

If I understand it right, I can combine all these things to do the job. Something like:

bool succeeded = false;
while(!succeeded)
{
    WATCH( "ServiceCache:Foo:" + theFoo.FooId );
    WATCH( "ServiceCache:FooIndexByOwner:" + theFoo.OwnerId );
    WATCH( "ServiceCache:FooIndexAll" );
    MULTI();
    SET ("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
    SADD ("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
    SADD ("ServiceCache:FooIndexAll", theFoo.FooId);
    // EXEC returns null if any watched key was modified after WATCH;
    // a non-null reply means the whole transaction was applied.
    succeeded = (EXEC() != null);
}

Finally I'd have to translate this pseudocode into real code depending on how my client library exposes WATCH/MULTI/EXEC; it looks like they need some sort of context object to hook them together.
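To make the retry loop concrete, here is a self-contained Python simulation of the WATCH/MULTI/EXEC semantics (an in-memory stand-in, not a real Redis client): each key carries a version, WATCH records it, and EXEC applies the queued commands only if no watched key has been written since, returning None otherwise, which is exactly what drives the retry.

```python
class FakeRedis:
    """In-memory sketch of a Redis store with per-key write versions."""
    def __init__(self):
        self.data = {}
        self.versions = {}

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def sadd(self, key, member):
        self.data.setdefault(key, set()).add(member)
        self.versions[key] = self.versions.get(key, 0) + 1

class Transaction:
    """Sketch of WATCH/MULTI/EXEC: EXEC aborts (returns None) if any
    watched key's version changed after it was watched."""
    def __init__(self, r):
        self.r = r
        self.watched = {}
        self.queued = []

    def watch(self, key):
        self.watched[key] = self.r.versions.get(key, 0)

    def queue(self, op, *args):          # MULTI just queues commands
        self.queued.append((op, args))

    def execute(self):
        for key, version in self.watched.items():
            if self.r.versions.get(key, 0) != version:
                return None              # optimistic-concurrency conflict
        for op, args in self.queued:
            getattr(self.r, op)(*args)
        return len(self.queued)

r = FakeRedis()
foo = {"FooId": 1, "OwnerId": 7}

succeeded = False
while not succeeded:
    t = Transaction(r)
    t.watch("ServiceCache:Foo:1")
    t.watch("ServiceCache:FooIndexByOwner:7")
    t.queue("set", "ServiceCache:Foo:1", foo)
    t.queue("sadd", "ServiceCache:FooIndexByOwner:7", 1)
    succeeded = t.execute() is not None  # retry on conflict
```

In redis-py, for example, the "context" is a pipeline object whose `watch()`/`multi()`/`execute()` calls play these roles.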

All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

How do I lock properly?

Even if I had no indexes, there's still a (probably rare) race condition.

A: HGET - cache miss
B: HGET - cache miss
A: SELECT
B: SELECT
A: HSET
C: HGET - cache hit
C: UPDATE
C: HSET
B: HSET ** this is stale data that's clobbering C's update.

Note that C could just be a really-fast A.

Again I think WATCH, MULTI, retry would work, but... ick.

I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here?

Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id} ?

How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
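For reference, the usual shape of the lock-key pattern is `SET key token NX EX ttl`: NX makes acquisition a single atomic step, and the TTL is what handles abandonment, since a crashed holder's lock simply expires. A minimal in-memory sketch of those semantics (not a real client; the token check on release keeps a slow client from deleting a lock that expired and was re-acquired by someone else):

```python
import time
import uuid

class FakeLockStore:
    """In-memory sketch of SET key token NX EX ttl lock semantics."""
    def __init__(self):
        self.locks = {}  # key -> (token, expiry timestamp)

    def acquire(self, key, token, ttl):
        now = time.time()
        holder = self.locks.get(key)
        if holder is not None and holder[1] > now:
            return False                       # held and not yet expired
        self.locks[key] = (token, now + ttl)   # NX: take it in one step
        return True

    def release(self, key, token):
        holder = self.locks.get(key)
        # Only the holder's own token may release the lock.
        if holder is not None and holder[0] == token:
            del self.locks[key]
            return True
        return False

store = FakeLockStore()
token = str(uuid.uuid4())
got_it = store.acquire("ServiceCache:Locks:Foo:1", token, ttl=10)
```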

© Stack Overflow or respective owner
