A quick look at: sys.dm_os_buffer_descriptors
- by Jonathan Allen
SQL Server places data into cache as it reads it from disk, so as to speed up future queries. This DMV lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. The DMV returns a row for each page in the buffer pool, along with the database ID that the page belongs to, so counting the rows per database gives us the number of cached pages:
USE [tempdb]
GO
SELECT COUNT(*) AS cached_pages_count ,
CASE database_id
WHEN 32767 THEN 'ResourceDb'
ELSE DB_NAME(database_id)
END AS Database_name
FROM sys.dm_os_buffer_descriptors
GROUP BY DB_NAME(database_id) ,
database_id
ORDER BY cached_pages_count DESC;
This gives you results which are quite useful, but if you add a new column with the code:
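( COUNT(*) * 8.0 ) / 1024 AS MB ,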
…to convert the pages value into MB (SQL Server data pages are 8KB, so this is the same conversion the logging query later in the article uses), then the results become more relevant and meaningful.
To see how your server reacts to queries, start up SSMS and connect to a test server and database – mine is called AdventureWorks2008. Make sure you start from a known position by running:
-- Only run this on a test server, otherwise your production server's
-- performance may drop off a cliff and your phone will start ringing.
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO
Now we can run a query that would normally turn a DBA’s hair white:
USE [AdventureWorks2008]
GO
SELECT *
FROM [Sales].[SalesOrderDetail] AS sod
INNER JOIN [Sales].[SalesOrderHeader] AS soh
ON [sod].[SalesOrderID] = [soh].[SalesOrderID]
…and then check our cache situation:
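-- A minimal variation on the cached-pages query from the start of the
-- article, filtered to just this database, is one way to check:
SELECT COUNT(*) AS cached_pages_count ,
    ( COUNT(*) * 8.0 ) / 1024 AS MB
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID('AdventureWorks2008')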
A nice low figure – not! Almost 2,000 pages of data in cache, which at 8KB per page equates to approximately 15MB. Luckily these tables are quite narrow; if this had been a table with more columns, the effect could have been even more dramatic.
So, let’s make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we’ll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer.
SELECT [sod].[OrderQty] ,
[sod].[ProductID] ,
[soh].[OrderDate] ,
[soh].[CustomerID]
FROM [Sales].[SalesOrderDetail] AS sod
INNER JOIN [Sales].[SalesOrderHeader] AS soh
ON [sod].[SalesOrderID] = [soh].[SalesOrderID]
WHERE [soh].[CustomerID] = 29722
…and check our effect on the cache:
Now that is more sympathetic to our server and the other systems sharing its resources.
I can hear you asking: “What has this got to do with logging, Jonathan?”
Well, a smart DBA will keep an eye on this metric on their servers so that they know how their hardware is coping, and will be ready to investigate anomalies so that no ‘disruptive’ code starts to unsettle things.
Capturing this information over a period of time lets you build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for overnight jobs, or otherwise balance the work of your server.
You could schedule this to run as a SQL Agent job (a sketch of one appears after the INSERT statement below) and store the data in your DBA’s database by creating a table with:
IF OBJECT_ID('CachedPages') IS NOT NULL
DROP TABLE CachedPages
CREATE TABLE CachedPages
(
cached_pages_count INT ,
MB INT ,
Database_Name VARCHAR(256) ,
CollectedOn DATETIME DEFAULT GETDATE()
)
…and then filling it with:
INSERT INTO [dbo].[CachedPages]
( [cached_pages_count] ,
[MB] ,
[Database_Name]
)
SELECT COUNT(*) AS cached_pages_count ,
( COUNT(*) * 8.0 ) / 1024 AS MB ,
CASE database_id
WHEN 32767 THEN 'ResourceDb'
ELSE DB_NAME(database_id)
END AS Database_name
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
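If you want to automate the collection, a sketch of the SQL Agent job mentioned earlier might look something like this (the job name, the ‘DBA’ database it writes to, and the 15-minute schedule are only examples, so adjust them to suit your environment):
EXEC msdb.dbo.sp_add_job
    @job_name = N'Log cached pages' ;
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Log cached pages' ,
    @step_name = N'Collect sys.dm_os_buffer_descriptors' ,
    @subsystem = N'TSQL' ,
    @database_name = N'DBA' , -- the database holding the CachedPages table
    @command = N'INSERT INTO dbo.CachedPages ( cached_pages_count, MB, Database_Name )
SELECT COUNT(*) ,
    ( COUNT(*) * 8.0 ) / 1024 ,
    CASE database_id WHEN 32767 THEN ''ResourceDb'' ELSE DB_NAME(database_id) END
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id;' ;
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Log cached pages' ,
    @name = N'Every 15 minutes' ,
    @freq_type = 4 , -- daily
    @freq_interval = 1 ,
    @freq_subday_type = 4 , -- units of minutes
    @freq_subday_interval = 15 ;
EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Log cached pages' ,
    @server_name = N'(local)' ;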
After this has been left logging your system metrics for a while, you can easily see how your databases use the cache over time and may spot spikes that warrant your attention.
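For example, a quick way to summarise the collected figures per database per day might be something like this (just a sketch against the CachedPages table created above):
SELECT [Database_Name] ,
    CONVERT(DATE, [CollectedOn]) AS CollectionDate ,
    MAX([MB]) AS PeakMB ,
    AVG([MB]) AS AvgMB
FROM [dbo].[CachedPages]
GROUP BY [Database_Name] ,
    CONVERT(DATE, [CollectedOn])
ORDER BY [Database_Name] ,
    CollectionDate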
This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem you can see which statistics are out of their normal range and target your efforts to resolve the issue more rapidly.