Transaction Log Management
It is better to manage transaction log size manually than to allow SQL Server to auto-grow it. Unfortunately, it is
not always easy to determine the optimal log size. On one hand, you want the transaction log to be big enough to avoid
auto-growth events. On the other hand, you would like to keep the log small, saving disk space and reducing the time
required to zero-initialize the log when the database is restored from a backup.
Tip  Remember to keep some space reserved in the log file if you are using any high-availability or other technologies
that rely on transaction log records. SQL Server is not able to truncate the transaction log during log backups if something
goes wrong with those processes. Moreover, you should implement a monitoring and notification framework that alerts
you to such conditions and gives you time to react before the transaction log becomes full.
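Such monitoring can start with a couple of catalog and DMV queries. A minimal sketch (the alerting thresholds and delivery mechanism are left out, and are not part of the original text):

```sql
-- Databases whose log cannot currently be truncated, and why
-- (e.g. LOG_BACKUP, AVAILABILITY_REPLICA, REPLICATION).
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc NOT IN (N'NOTHING', N'CHECKPOINT');

-- Log size and space usage for the current database
-- (sys.dm_db_log_space_usage is available in SQL Server 2012 and later).
SELECT total_log_size_in_bytes, used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

A non-empty result from the first query, or a high percentage from the second, is the condition the notification framework should raise.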
Another important factor is the number of VLFs in the log files. You should avoid the situation where the
transaction log becomes overly fragmented and has a large number of small VLFs. Similarly, you should avoid the
situation where the log has too few, very large VLFs.
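You can check how many VLFs a log currently has with a simple query. The sketch below assumes SQL Server 2016 SP2 or later; on older versions, DBCC LOGINFO returns one row per VLF instead:

```sql
-- Number of VLFs in the current database's transaction log.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```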
For databases that require a large transaction log, you can pre-allocate space in 8000MB chunks, each of which
generates 16 VLFs of 500MB. If a database does not require a large (more than 8000MB) transaction log, you can
pre-allocate the log space in a single operation based on the size requirements.
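The chunked pre-allocation described above can be sketched as follows; the database and logical file names are hypothetical:

```sql
-- Grow the log to 16000MB in two 8000MB steps; each step generates
-- 16 VLFs of 500MB. 'MyDB' and 'MyDB_log' are placeholder names.
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 8000MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 16000MB);
```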
Note  There is a bug in SQL Server 2005-2008R2 that incorrectly grows the transaction log if its size is a
multiple of 4GB. You can use multiples of 4000MB instead. This bug has been fixed in SQL Server 2012.
You should still allow SQL Server to auto-grow the transaction log in case of an emergency. However, choosing
the right auto-growth size is tricky. For databases with large transaction logs, it is wise to use 8000MB increments to reduce
the number of VLFs. However, zeroing out 8000MB of newly allocated space can be time consuming, and all database
activities that write to the log file are blocked during the auto-growth process. This is another argument for manual
transaction log size management.
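Setting a fixed auto-growth increment is a one-line change; the database and file names below are hypothetical, and the 8000MB value follows the guidance above for large logs:

```sql
-- Use a fixed auto-growth increment instead of a percentage.
-- 'MyDB' and 'MyDB_log' are placeholder names.
ALTER DATABASE MyDB
MODIFY FILE (NAME = N'MyDB_log', FILEGROWTH = 8000MB);
```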
Tip  The decision of what auto-growth size should be used depends on the performance of the I/O subsystem.
You should analyze how long zero-initialization takes and find a sweet spot where both the auto-growth time and the size of
the generated VLFs are acceptable. A 1GB auto-growth increment works well in many cases.
SQL Server writes to the transaction log synchronously when data is modified. OLTP systems, with
volatile data and heavy transaction log activity, should store the transaction log on a disk array with good
write performance and low latency. Transaction log I/O performance is less important when the data is static, for
example in data warehouse systems; however, you should consider how it affects the performance and duration of
the processes that refresh the data there.
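One way to gauge log write latency is the sys.dm_io_virtual_file_stats DMV. The sketch below computes the average write latency per log file; the counters are cumulative since instance startup, so sampling twice and taking the difference gives a picture of current behavior:

```sql
-- Average write latency per transaction log file, since instance startup.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON vfs.database_id = mf.database_id
           AND vfs.file_id = mf.file_id
WHERE mf.type_desc = N'LOG';
```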
Best practices suggest storing the transaction log on a dedicated disk array optimized for sequential write
performance. This is great advice for the situation where an underlying I/O subsystem has enough power to
accommodate multiple high-performance disk arrays. In some cases, however, when faced with budget constraints
and not enough disk drives, you can achieve better I/O performance by storing data and log files on a single disk
array. You should remember, however, that keeping data and log files on the same disk array could lead to data loss in
case of a disk array failure.
 
 