Keep the Statistics Up-to-Date
The performance impact of database statistics is explained in detail in Chapter 12; however, this short list will serve as
a quick and easy reference for keeping your statistics up-to-date:
Allow SQL Server to automatically maintain the statistics of the data distribution in the tables
by using the default settings for the configuration parameters AUTO_CREATE_STATISTICS and
AUTO_UPDATE_STATISTICS (see the first sketch below).
As a proactive measure, you can programmatically update the statistics of every database
object on a regular basis, as you determine is needed and supported within your system. This
practice partly protects your database from outdated statistics in case the auto update
statistics feature fails to provide a satisfactory result. In Chapter 12, I illustrate how to set up a
SQL Server job to programmatically update the statistics on a regular basis; a simpler sketch appears below.
Remember that you also have the ability to update the statistics asynchronously (see the
final sketch below). This reduces contention on the statistics as they're being updated; thus, if
you have a system with fairly constant access, you can use this method to update the statistics more frequently.
Note: Please ensure that the statistics update job is scheduled before the completion of the index
defragmentation job, as explained later in this chapter.
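As a minimal sketch, the following confirms the default automatic statistics settings and restores them if they have been changed; the database name SalesDB is hypothetical, so substitute your own.

    -- Check the automatic statistics settings; both flags should
    -- return 1, which is the default.
    SELECT name,
           is_auto_create_stats_on,
           is_auto_update_stats_on
    FROM sys.databases
    WHERE name = 'SalesDB';

    -- Restore the defaults if either setting has been turned off.
    ALTER DATABASE SalesDB SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS ON;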
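For the proactive update, Chapter 12 covers the full job-based approach; the sketch below uses sp_updatestats, which refreshes only those statistics whose underlying rows have been modified since the last update, and UPDATE STATISTICS to target a single table. The table name dbo.Orders is hypothetical.

    USE SalesDB;
    GO
    -- Refresh all statistics in the database that have seen row
    -- modifications since their last update.
    EXEC sp_updatestats;

    -- Or target one table with a full scan for maximum accuracy.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;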
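Asynchronous statistics updates are enabled per database; a one-line sketch, again using the hypothetical SalesDB:

    -- With this setting on, a query that triggers a statistics update
    -- proceeds under the old statistics while the update runs in the
    -- background, reducing contention on the statistics.
    ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;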
Maintain a Minimum Amount of Index Fragmentation
The following best practices will help you maintain a minimum amount of index fragmentation:
Defragment a database on a regular basis during nonpeak hours.
On a regular basis, determine the level of fragmentation of your indexes; then, based on that
fragmentation, either rebuild or reorganize the index by executing the defragmentation
queries outlined in Chapter 13 (a sketch follows this list).
Remember that very small tables don't need to be defragmented at all.
Different rules may apply to very large databases when it comes to defragmenting indexes.
If you have indexes that are only ever used for single seek operations, fragmentation
doesn't impact their performance, since a singleton lookup reads only a handful of pages
regardless of their physical order.
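A common shape for that fragmentation check (the complete queries appear in Chapter 13) is based on the sys.dm_db_index_physical_stats DMV. The 10 and 30 percent thresholds below are conventional rules of thumb rather than fixed rules, and SalesDB, dbo.Orders, and IX_Orders_OrderDate are hypothetical names:

    USE SalesDB;
    GO
    -- Report fragmentation for all indexes, skipping heaps and very
    -- small indexes, which don't benefit from defragmentation.
    SELECT OBJECT_NAME(ips.object_id) AS TableName,
           i.name AS IndexName,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
        AND i.index_id = ips.index_id
    WHERE ips.index_id > 0
      AND ips.page_count > 1000
      AND ips.avg_fragmentation_in_percent > 10;

    -- Reorganize for moderate fragmentation (roughly 10 to 30 percent).
    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

    -- Rebuild for heavy fragmentation (roughly above 30 percent);
    -- ONLINE = ON requires an edition that supports online rebuilds.
    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (ONLINE = ON);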
Avoid Database Options Such As AUTO_CLOSE or AUTO_SHRINK
AUTO_CLOSE cleanly shuts down a database and frees all its resources when the last user connection is closed. This
means all data and query plans in the cache are flushed. When the next connection comes in, not only does
the database have to start up again, but all the data has to be reloaded into the cache, and stored procedures and other
queries have to be recompiled. That's an extremely expensive cycle for most database systems. Leave AUTO_CLOSE
set to its default of OFF.
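A quick way to verify and enforce this setting, again using the hypothetical SalesDB:

    -- A value of 1 means AUTO_CLOSE is on and should be disabled.
    SELECT name, is_auto_close_on
    FROM sys.databases
    WHERE name = 'SalesDB';

    ALTER DATABASE SalesDB SET AUTO_CLOSE OFF;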
AUTO_SHRINK periodically shrinks the size of the database. It can shrink the data files and, under the simple
recovery model, the log files. While doing so, it can block other processes, seriously slowing down your system.
More often than not, automatic file growth is also enabled on systems with AUTO_SHRINK on, so your
system is slowed down yet again when the data or log files have to grow back. Further, the repeated shrink-and-grow
cycle fragments the physical files at the operating system level, seriously impacting performance. Size your databases
appropriately, and monitor them for growth needs. If you must grow them automatically, do so by fixed physical
increments, not by percentages.
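Both recommendations translate into straightforward settings; in this sketch, SalesDB, the logical file name SalesDB_Data, and the 500MB increment are illustrative values:

    -- Disable automatic shrinking.
    ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;

    -- Grow the data file by a fixed increment rather than a percentage.
    ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 500MB);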
 
 