I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data-fetching pieces of your app, you check whether an object is available in cache, and return that instead of hitting the data store (there’s a sketch of this pattern below). I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data while the others have no knowledge of the change, leaving them with stale copies.

Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to run a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more, because you’d need to pay for another role or VM or something to host the cache. Now you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching.

For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced across them. It’s like adding servers to or removing them from a Web farm all willy-nilly, at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure.

The slightly annoying thing about developing for a Web role in Azure is that the local emulator launched by Visual Studio is a little on the slow side. If you’re used to the built-in Web server, you’re used to building and then alt-tabbing to your browser to refresh a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for that workflow, but ideally you still want to code your app against this fantastic distributed cache mechanism.

So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to HttpRuntime.Cache, this should be pretty familiar. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running locally, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly.
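Before getting to the distributed version, here’s roughly what that single-box pattern looks like. This is a minimal sketch, not actual POP Forums code; Topic, ForumRepository and LoadTopicFromDatabase are made-up names for illustration:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class Topic { }

public class ForumRepository
{
    // Hypothetical example: check HttpRuntime.Cache before hitting the database.
    public Topic GetTopic(int topicID)
    {
        var key = "topic-" + topicID;
        var cached = HttpRuntime.Cache[key] as Topic;
        if (cached != null)
            return cached; // cache hit, no database call

        var topic = LoadTopicFromDatabase(topicID);
        HttpRuntime.Cache.Insert(key, topic, null,
            DateTime.UtcNow.AddSeconds(60), Cache.NoSlidingExpiration);
        return topic;
    }

    private Topic LoadTopicFromDatabase(int topicID)
    {
        // Imagine a real data call here.
        return new Topic();
    }
}
```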
First, design an interface to handle the cache insertion, fetching and removal. Mine looks like this:

```csharp
public interface ICacheProvider
{
    void Add(string key, object item, int duration);
    T Get<T>(string key) where T : class;
    void Remove(string key);
}
```

Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime:

```csharp
public class AzureCacheProvider : ICacheProvider
{
    public AzureCacheProvider()
    {
        // DataCache lives in Microsoft.ApplicationServer.Caching; see the how-to
        _cache = new DataCache("default");
    }

    private readonly DataCache _cache;

    public void Add(string key, object item, int duration)
    {
        // duration is in milliseconds
        _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
    }

    public T Get<T>(string key) where T : class
    {
        return _cache.Get(key) as T;
    }

    public void Remove(string key)
    {
        _cache.Remove(key);
    }
}

public class LocalCacheProvider : ICacheProvider
{
    public LocalCacheProvider()
    {
        _cache = HttpRuntime.Cache;
    }

    private readonly System.Web.Caching.Cache _cache;

    public void Add(string key, object item, int duration)
    {
        _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }

    public T Get<T>(string key) where T : class
    {
        return _cache[key] as T;
    }

    public void Remove(string key)
    {
        _cache.Remove(key);
    }
}
```

Feel free to expand these to use whatever cache features you want.

I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention-based scheme, where if you need an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

```csharp
ObjectFactory.Initialize(x =>
{
    x.Scan(scan =>
    {
        scan.AssembliesFromApplicationBaseDirectory();
        scan.WithDefaultConventions();
    });
    if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
        x.For<ICacheProvider>().Use<AzureCacheProvider>();
    else
        x.For<ICacheProvider>().Use<LocalCacheProvider>();
});
```

If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider; otherwise it maps to LocalCacheProvider.

Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller -> BusinessLogicClass -> Repository. Let’s say your repository class looks like this:

```csharp
public class MyRepo : IMyRepo
{
    public MyRepo(ICacheProvider cacheProvider)
    {
        _context = new MyDataContext();
        _cache = cacheProvider;
    }

    private readonly MyDataContext _context;
    private readonly ICacheProvider _cache;

    public SomeType Get(int someTypeID)
    {
        var key = "somename-" + someTypeID;
        var cachedObject = _cache.Get<SomeType>(key);
        if (cachedObject != null)
        {
            _context.SomeTypes.Attach(cachedObject);
            return cachedObject;
        }
        var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
        if (someType != null) // don't try to cache a null; both providers will throw
            _cache.Add(key, someType, 60000);
        return someType;
    }

    // more stuff to update, delete or whatever, being sure to remove
    // from cache when you do so
}
```
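That trailing comment is the critical part, so to make it concrete, here’s a hedged sketch of what an update method might look like. It assumes someType is already attached and tracked by the context, as it would be coming out of Get above; the persistence details will vary with your EF setup:

```csharp
public void Update(SomeType someType)
{
    // Persist the change first (assumes someType is attached and tracked;
    // adjust for your own EF setup).
    _context.SaveChanges();

    // Then evict the cached copy, using the same key convention as Get.
    // The next Get on any role instance will miss and re-read the database.
    _cache.Remove("somename-" + someType.SomeTypeID);
}
```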
When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is: Azure, HttpRuntime or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework) and inserts the object into the cache before returning it.

The important thing, as in the update sketch above, is that every method in the repo class that alters or deletes the object constructs the same key for the cached object, in this case “somename-” plus the ID of the object, and then removes it from cache. That way, no matter which instance of the role is processing the request, it won’t find the object once it has been made stale, that is, updated or outright deleted, and will be forced to hit the database again.

So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution between the two caching providers, you could see some strange errors. I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components, one that said my configuration, outlined in that how-to article, was wrong. That error would never occur when using the HttpRuntime. This is part of a larger debate about mixing components like this, and about how you configure them. I kinda hate XML config files and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.
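On the plus side, the ICacheProvider abstraction at least makes the unit-testing half easy, because you can swap in a fake with no provider quirks at all. Here’s a minimal sketch of such a fake (my own hypothetical class, not part of any package):

```csharp
using System.Collections.Generic;

// An in-memory fake for unit tests. It ignores duration entirely,
// so it won't exercise expiration behavior or either provider's quirks.
public class FakeCacheProvider : ICacheProvider
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public void Add(string key, object item, int duration)
    {
        _items[key] = item;
    }

    public T Get<T>(string key) where T : class
    {
        object item;
        _items.TryGetValue(key, out item);
        return item as T;
    }

    public void Remove(string key)
    {
        _items.Remove(key);
    }
}
```

Just remember that a fake like this proves out your repo logic, not the providers’ behavior, so you still need integration tests against the real Azure and HttpRuntime implementations.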