ANTS Memory Profiler 7.0
Posted by James Michael Hare on Geeks with Blogs
Published on Thu, 10 Mar 2011 20:55:39 GMT
I had always been a fan of Red Gate's tools (.NET Reflector is absolutely invaluable, and the ANTS Performance Profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.
Background
While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program.
As an example, I'd been working on a data access layer at work to call a market data web service. This web service would take a list of symbols to quote and return the quote data. To consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data. That's not long enough for customers to notice a quote going "stale", but just long enough to collapse multiple quote requests for the same symbol in a short period of time.
A 5-second cache may not sound like much, but it pays off by saving us roughly 42% of our web service calls while still providing relatively up-to-date information. The question was whether the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.
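To make the idea concrete, here is a minimal sketch of that kind of short-TTL cache. The real layer is .NET; this Python version is purely illustrative, and `QuoteCache`, `fetch`, and `ttl` are hypothetical names, not the actual implementation:

```python
import time

class QuoteCache:
    """Serve repeated requests for the same symbol from a short-lived cache."""

    def __init__(self, fetch, ttl=5.0):
        self.fetch = fetch          # function: symbol -> quote (the service call)
        self.ttl = ttl              # seconds a cached quote stays "fresh"
        self._store = {}            # symbol -> (timestamp, quote)

    def get(self, symbol):
        now = time.monotonic()
        entry = self._store.get(symbol)
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]         # cache hit: no service call made
        quote = self.fetch(symbol)  # miss or stale: call the service
        self._store[symbol] = (now, quote)
        return quote
```

Repeated requests for the same symbol inside the TTL window return the cached quote, so only one service call is made; the trade-off the memory profiler helps evaluate is the memory those cached entries occupy while they live.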
First Impressions
The main thing I've always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you in a way that makes it easy to find what you need with little digging required. I've worked with other, older profilers before (which shall remain nameless, other than to hint that one was created by a very large chip maker) where figuring out how to do simple tasks was a mind-boggling experience.
Not so with AMP. The opening dialog is very straightforward: you can choose whether to profile an executable, a web application (either in IIS or from VS's web development server), a Windows service, etc.
So I chose .NET Executable, navigated to the build location of my test harness, and began profiling.
While the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or stop profiling.
Snapshots
Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent).
Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh!
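The promotion behavior described above isn't unique to .NET. CPython's cycle collector is also generational, and its statistics show the same pattern: the youngest generation is swept far more often than the oldest, which is why an object promoted to an older generation can linger long after its last reference is released. A rough, cross-runtime sketch:

```python
import gc

# Allocate plenty of short-lived objects to trigger several young-generation
# collections (CPython collects a generation once enough net allocations occur).
for _ in range(10):
    _junk = [object() for _ in range(100_000)]

stats = gc.get_stats()              # one stats dict per generation
for gen, s in enumerate(stats):
    print(f"generation {gen}: collected {s['collections']} times")
```

The counts drop sharply from the youngest generation to the oldest, mirroring why a 30-minute cache entry that reaches Gen 2 sticks around until one of the rare full collections examines it.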
Analysis
It looks like (from the second pie chart) that the majority of the allocations were in the string class. This was also expected, because most of the memory allocated is in the web service responses, so the entities I adapt them to (to avoid being too tightly coupled to the web service proxy classes, which can change out from under me) don't seem to be taking a significant portion of memory.
I also appreciate that they have clear summary text in key places such as “No issues with large object heap fragmentation were detected”. For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it.
There is also a handy link at the bottom for “What to look for on the summary” which loads a web page of help on key points to look for.
Clicking over to the session overview, it's easy to compare the samples at each snapshot to see whether your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I'm pretty happy that memory allocation and heap size appear fairly stable and under control.
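This snapshot-and-compare workflow has a rough analogue in Python's standard library: `tracemalloc` can diff two snapshots and rank the allocation deltas, much like comparing AMP samples. This is an illustrative sketch of the technique, not AMP's mechanism, and `grown` is a hypothetical stand-in for growing state:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

grown = ["quote-%d" % i for i in range(50_000)]   # simulated memory growth

after = tracemalloc.take_snapshot()
# Largest allocation deltas first, grouped by the source line that allocated.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```

The top entry points straight at the line responsible for the growth, which is exactly the kind of question a snapshot comparison is meant to answer.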
Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends.
Back on the analysis tab, we can use the [Class List] button to get an idea of which classes make up the majority of our memory usage. To little surprise, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second.
I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from.
So I scrolled back through the graph and discovered that they were being held by the System.ServiceModel.ChannelFactoryRefCache, so I was satisfied this was just an artifact of my WCF proxy.
I also like that at the bottom of the Instance Categorizer it gives you a series of filters and offers guidance on which filter to use based on the problem you're trying to find. For example, if I suspected a memory leak, I might filter for survivors in growing classes – that is, for classes whose instances are growing in memory (more are being created than cleaned up), which instances survive garbage collection. This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them!
Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the “Instance Retention Graph” which creates a graph showing what references are being held in memory and who is holding onto them.
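The core idea of a retention graph – walking reference chains backwards to ask "who is still holding this object?" – can be mimicked in CPython with `gc.get_referrers`. This is a toy sketch; `Quote` and `cache` are hypothetical names, and AMP's graph is far richer than this:

```python
import gc

class Quote:
    """Stand-in for a cached quote object."""

q = Quote()
cache = {"IBM": q}                  # the reference keeping q alive

# Ask the collector what still refers to q; the cache dict shows up,
# just as a retention graph points at the holder preventing collection.
holders = gc.get_referrers(q)
print(any(h is cache for h in holders))
```

Finding the holder is the first step; deciding whether that reference is intentional (a cache) or a leak (a forgotten subscription or static list) is the part the tool leaves to you.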
Visual Studio Integration
Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS.
So once you go back into VS after installation, you’ll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio.
Clicking one of these options fires up the project in the profiler immediately, letting you get right in. It doesn't integrate with the Visual Studio windows themselves (as the VS profiler does), but the wealth of information it provides, and the clear and concise way it presents it, makes it well worth it.
Summary
If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler. It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for.
I’ve used other profilers before that came with 3-inch thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It’s built for your everyday developer to get in and find their problems quickly, and I like that!
© Geeks with Blogs or respective owner