Fragmented log files could be slowing down your database

Posted by Fatherjack on Simple Talk, Tue 13 Nov 2012

Something that a lot of DBAs sometimes forget is that database log files get fragmented in the same way that data files do. The cause is very different but the effect is the same: too much effort spent reading and writing data.

Data files get fragmented as data is changed through normal system activity: INSERTs, UPDATEs and DELETEs all cause fragmentation, and most experienced DBAs monitor their indexes for fragmentation and deal with it accordingly. However, you don’t hear about so many doing the same for their log files.
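(For reference, the sort of index fragmentation check most DBAs already run looks something like this rough sketch; the 30% threshold is just a common rule of thumb, not anything specific to this article.)

-- List indexes in the current database with noticeable fragmentation
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;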

How can a log file get fragmented? I’m glad you asked. When you create a database, at least two files are created on the disk storage: an mdf for the data and an ldf for the log (you can also have ndf files for extra data storage, but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a ‘wrap-around’ fashion (more on that later). When a log file is created, it is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depend on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

Let’s see how many VLFs we get in a brand new database.

USE master;
GO

-- Create a test database with a 5MB log file
-- (adjust the file paths to match your own instance)
CREATE DATABASE VLF_Test ON
( NAME = VLF_Test,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
    SIZE = 100MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 50MB )
LOG ON
( NAME = VLF_Test_Log,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
    SIZE = 5MB,
    MAXSIZE = 250MB,
    FILEGROWTH = 5MB );
GO

USE VLF_Test;
GO

-- Returns one row per virtual log file (VLF)
DBCC LOGINFO;

Running this creates a new database with the specified file sizes and then returns the DBCC LOGINFO results to the query results pane.

[Image VLF_01: DBCC LOGINFO results for the 5MB log file]

The DBCC LOGINFO results contain plenty of interesting information, but let’s first note that there are 4 rows, which means 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB each.
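If you want to query the output rather than eyeball it, you can capture DBCC LOGINFO into a table. This is a rough sketch rather than anything official: the column list below matches the SQL Server 2008 output (later versions add a RecoveryUnitId column, so adjust to suit).

-- Capture DBCC LOGINFO so the VLF count and sizes can be queried
CREATE TABLE #loginfo
(
 FileId int,
 FileSize bigint,
 StartOffset bigint,
 FSeqNo int,
 Status int,
 Parity tinyint,
 CreateLSN numeric(25, 0)
);

INSERT #loginfo
EXECUTE ('DBCC LOGINFO');

-- VLF count and sizes at a glance
SELECT COUNT(*) AS VLF_count,
       MIN(FileSize) AS smallest_VLF_bytes,
       MAX(FileSize) AS largest_VLF_bytes
FROM #loginfo;

DROP TABLE #loginfo;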

Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

LOG ON
( NAME = VLF_Test_Log,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
    SIZE = 1GB,
    MAXSIZE = 25GB,
    FILEGROWTH = 1GB );

With a bigger log file specified, we get more VLFs:

[Image VLF_03: DBCC LOGINFO results for the 1GB log file]

What if we make it bigger again?

LOG ON
( NAME = VLF_Test_Log,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
    SIZE = 5GB,
    MAXSIZE = 250GB,
    FILEGROWTH = 5GB );

This time even more VLFs are created within our log file: our 5GB log file is made up of 16 VLFs of 320MB each.

[Image VLF_02: DBCC LOGINFO results for the 5GB log file]

In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:

  • If the file growth is lower than 64MB then 4 VLFs are created.
  • If the growth is between 64MB and 1GB then 8 VLFs are created.
  • If the growth is greater than 1GB then 16 VLFs are created.
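To make the arithmetic concrete, here is the bullet list above expressed as a quick T-SQL sketch; @growth_mb is just a hypothetical variable standing in for the size of a single growth or file creation:

DECLARE @growth_mb int = 5120; -- e.g. the 5GB log file from the last example

SELECT CASE
           WHEN @growth_mb < 64 THEN 4
           WHEN @growth_mb <= 1024 THEN 8
           ELSE 16
       END AS VLFs_created,
       @growth_mb / CASE
           WHEN @growth_mb < 64 THEN 4
           WHEN @growth_mb <= 1024 THEN 8
           ELSE 16
       END AS approx_VLF_size_mb;

-- 5120MB is greater than 1GB, so 16 VLFs of roughly 320MB each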

Now the potential for chaos comes in if the default values and settings for log file growth are used. By default a new database gets a 1MB log file with unlimited growth in steps of 10%.

[Image VLF_04]
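To see why those defaults are a problem, here is a back-of-envelope sketch: starting from a 1MB log with 10% growth, every auto-grow event is tiny (well under 64MB), so by the rules above each one adds 4 VLFs.

DECLARE @size_mb float = 1, @growths int = 0;

WHILE @size_mb < 100
BEGIN
    SET @size_mb = @size_mb * 1.1; -- each Auto Grow adds 10%
    SET @growths = @growths + 1;
END;

SELECT @growths AS growth_events,  -- auto-grows needed just to reach 100MB
       @growths * 4 AS VLFs_added; -- 4 VLFs per sub-64MB growth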

The database we just created is 6MB; let’s add some data and see what happens.

USE VLF_Test;
GO

-- We need somewhere to put the data, so a table is in order
IF OBJECT_ID('A_Table') IS NOT NULL
    DROP TABLE A_Table;
GO

CREATE TABLE A_Table
(
 Col_A int IDENTITY,
 Col_B char(8000)
);
GO

-- Let's check the state of the log file
-- 4 VLFs found
EXECUTE ('DBCC LOGINFO');
GO

-- We can go ahead and insert some data and then check the state of the log file again
INSERT A_Table (Col_B)
SELECT TOP 500
    REPLICATE('a', 2000)
FROM sys.columns AS sc, sys.columns AS sc2;
GO

-- insert 500 rows and we get 22 VLFs
EXECUTE ('DBCC LOGINFO');
GO

-- Let's insert more rows
INSERT A_Table (Col_B)
SELECT TOP 2000
    REPLICATE('a', 2000)
FROM sys.columns AS sc, sys.columns AS sc2;
GO 10

-- insert 2,000 rows ten times (20,000 rows in total) and we suddenly have 107 VLFs
EXECUTE ('DBCC LOGINFO');

Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can degrade performance during database start-up, log backups and log restores, so it’s well worth keeping a check on this property.

How do we prevent excessive VLF creation?

Create your databases with appropriately large files and growth steps, and actively choose to grow your files yourself rather than leaving it to the Auto Grow event; that way every growth happens at a size you have deliberately chosen.
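A deliberate, manually sized growth might look like this (a sketch against the VLF_Test demo database; SIZE sets the new total size, so pick a value larger than the current file size and appropriate to your workload):

USE master;
GO

-- Grow the log to 8GB in one deliberate step instead of many small auto-grows
ALTER DATABASE VLF_Test
MODIFY FILE
( NAME = VLF_Test_Log,
  SIZE = 8192MB );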

How do we resolve a situation of a database with too many VLFs?

This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are:

    • Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    • Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    • Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year:

ALTER DATABASE YourDBName
MODIFY FILE
( NAME = FileNameOfTLogHere,
  SIZE = TheSizeYouWantItToBeIn_MB );

* – If you don’t know the file name of your log file, run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need.
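Putting those steps together against the VLF_Test demo database from earlier, a sketch might look like this (it assumes the database uses the FULL recovery model and has already had a full backup; the backup path and the 5GB target size are placeholders to adjust):

USE VLF_Test;
GO

-- Step 1: back up the log
BACKUP LOG VLF_Test TO DISK = 'C:\Backups\VLF_Test_log.trn';
GO

-- Step 2: shrink the log file to its smallest possible size
DBCC SHRINKFILE (VLF_Test_Log, TRUNCATEONLY);
GO

-- Step 3: re-size the log in one step to the size you actually need
ALTER DATABASE VLF_Test
MODIFY FILE
( NAME = VLF_Test_Log,
  SIZE = 5120MB );
GO

-- Confirm the new VLF count
DBCC LOGINFO;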

The resize step can take quite a while.

This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx.

The result of this will be a log file with a VLF count according to the bullet list above.

Knowing when VLFs are being created

By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as it will catch any sneaky auto-grows that take place and let you know about them right away.
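If you want a flavour of the technique without reading the whole article, a bare-bones sketch might look like the following. This is emphatically not Jonathan’s full solution (his handles the plumbing, activation and reporting properly); the queue, service and notification names here are hypothetical, and it assumes Service Broker is enabled in msdb (it is by default).

USE msdb;
GO

-- A queue and service to receive the notification messages
CREATE QUEUE AutoGrowQueue;
GO

CREATE SERVICE AutoGrowService
ON QUEUE AutoGrowQueue
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO

-- Fire a message into the queue whenever any log file auto-grows
CREATE EVENT NOTIFICATION LogAutoGrowNotification
ON SERVER
FOR LOG_FILE_AUTO_GROW
TO SERVICE 'AutoGrowService', 'current database';
GO

-- Peek at any captured auto-grow events
SELECT CAST(message_body AS xml) AS event_data
FROM AutoGrowQueue;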

Hassle free monitoring of VLFs

If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

Disclosure

I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.
