This query simply tells you how many VLFs you have in your transaction log file. Having a large
number of VLFs in your transaction log can affect write performance to your transaction log.
More important, it can have a huge effect on how long it takes to restore a database, and how long
it takes a database to become available in a clustering failover. It can also affect how long it takes to
recover a database when your instance of SQL Server is started or restarted. What is considered a
large number of VLFs?
I don't like to see more than a couple of hundred VLFs in a transaction log. For the most part,
having fewer VLFs is better than having a large number, but I don't worry too much until I start
getting more than 200-300. The most common way to get a high VLF count is when you create a
database in the FULL recovery model with the default settings for the size and autogrowth increment
for the transaction log file, and then you don't take frequent transaction log backups. By default, you
start out with a 1MB transaction log file that is set to grow by 10% when autogrow kicks in after
the 1MB file fills up completely. The now 1.1MB file will quickly fill up again, and autogrow will
make it 10% larger. This happens repeatedly; and each time the transaction log file is grown, more
VLFs are added to the transaction log. If the growth amount is less than 64MB, then 4 VLFs will be
added to the transaction log. If the growth amount is between 64MB and 1GB, then 8 VLFs will
be added to the transaction log. Finally, if the growth amount is over 1GB, then 16 VLFs will be
added to the transaction log.
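These growth-increment thresholds can be expressed as a small function. This is just a sketch of the rule as stated above (the behavior of SQL Server versions prior to the 2014 algorithm change), not output from any system view:

```python
def vlfs_added(growth_mb):
    """VLFs added per log growth event, using the thresholds described
    above (pre-SQL Server 2014 behavior)."""
    if growth_mb < 64:
        return 4
    elif growth_mb <= 1024:  # between 64MB and 1GB
        return 8
    else:                    # over 1GB
        return 16

print(vlfs_added(10), vlfs_added(512), vlfs_added(8000))  # 4 8 16
```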
Knowing this, you can see how a 1MB transaction log file can grow and end up with tens of thou-
sands of VLFs. The way to avoid this is to manually manage your transaction log file size, and to
change the autogrow increment to a more reasonable value. That way you will have fewer growth
events (whether manual or autogrows), and therefore a lower VLF count. With a relatively large and
active database, I recommend setting the autogrowth increment to 8000MB. This way, you only need
a few growth events to grow the transaction log file to a sufficiently large size, which keeps the VLF
count much lower.
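To make the difference concrete, here is a small simulation (with hypothetical sizes, and a hypothetical initial VLF count of 4) comparing the default 10% autogrow from a 1MB file against fixed 8000MB increments, using the per-growth VLF thresholds described earlier:

```python
def vlfs_added(growth_mb):
    # Per-growth thresholds described above (pre-2014 behavior)
    if growth_mb < 64:
        return 4
    elif growth_mb <= 1024:
        return 8
    return 16

def grow_log(start_mb, target_mb, next_growth):
    """Grow a log file from start_mb to target_mb; return
    (growth events, total VLFs). Initial VLF count of 4 is hypothetical."""
    size, vlfs, events = start_mb, 4, 0
    while size < target_mb:
        growth = next_growth(size)
        vlfs += vlfs_added(growth)
        size += growth
        events += 1
    return events, vlfs

# Default 10% autogrow from 1MB vs. fixed 8000MB increments, to 32GB
pct = grow_log(1, 32768, lambda s: s * 0.10)
fixed = grow_log(1, 32768, lambda s: 8000)
print("10% autogrow:", pct, "fixed 8000MB:", fixed)
```

The fixed 8000MB increment reaches the target in a handful of growth events and well under a hundred VLFs, while the percentage autogrow takes over a hundred growth events and accumulates many times more VLFs.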
Picking a good size for your transaction log file depends on a number of factors. First, how much
write activity do you think your database will see with its normal workload? You want to figure out
how much transaction log activity is generated in an hour, in terms of MB or GB. One easy way to
determine this is to take an uncompressed transaction log backup every hour for a full day. This
gives you a good idea of your average and peak log generation rates. Make sure that your transac-
tion log file is large enough to hold at least eight hours of normal activity, and consider when and
how often you do maintenance activity such as reorganizing or rebuilding indexes, which generate a
lot of transaction log activity. Creating new indexes on large tables and loading or deleting a lot of
data also creates a lot of transaction log activity.
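As a back-of-the-envelope sketch (all measurements here are hypothetical), the sizing math from a day of hourly, uncompressed log-backup sizes might look like this:

```python
# Hypothetical hourly transaction log backup sizes (MB) over one day;
# the two large values stand in for a nightly index-maintenance window
hourly_log_mb = [200, 180, 250, 300, 1200, 900, 400, 250,
                 220, 260, 310, 280, 240, 230, 210, 190,
                 180, 170, 1500, 1100, 600, 350, 260, 210]

avg_mb_per_hour = sum(hourly_log_mb) / len(hourly_log_mb)
peak_mb_per_hour = max(hourly_log_mb)

# Hold at least eight hours of activity, sized to the peak hour so that
# index rebuilds and bulk loads do not force the log to autogrow
suggested_log_size_mb = 8 * peak_mb_per_hour
print(avg_mb_per_hour, peak_mb_per_hour, suggested_log_size_mb)
```

Sizing to the peak hour rather than the average is the conservative choice; the whole point is that the log never needs to autogrow during normal operation or maintenance.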
You should also consider how often you are going to run transaction log backups (in order to help
meet your Recovery Point Objective [RPO] and Recovery Time Objective [RTO]). If you need to
run very frequent transaction log backups, you may be able to have a somewhat smaller transaction
log file. This also depends on how large your database is and how long it takes to do a full database
backup. While a full database backup is running, transaction log backups will not clear the log file.
If you have a very slow I/O subsystem and a very large database, your full database backups may
take a long time to complete. You want to size your transaction log file to be large enough that it
never has to autogrow. One disadvantage of having an extremely large transaction log file (besides
wasting some disk space) is that it will take quite a bit longer to restore a copy of your database,
because SQL Server cannot use Windows Instant File Initialization on transaction log files.