Note that even though the fill factor for the index is 75 percent, Avg. Page Density (full) has decreased to
67.49 percent, which can be computed as follows:
Avg. Page Density (full) = Average rows per page / Maximum rows per page
                         = (27 / 5) / 8
                         = 67.5%
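You can confirm this value directly with sys.dm_db_index_physical_stats; the database, table, and index names below are placeholders, and note that avg_page_space_used_in_percent is populated only in SAMPLED or DETAILED mode:

```sql
-- Check page density and fragmentation for one index.
-- dbo.Test1 is a placeholder table name; substitute your own.
SELECT i.name AS index_name,
       ips.avg_page_space_used_in_percent,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.Test1'), NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id;
```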
From the preceding example, you can see that the fill factor is applied only when the index is created; it is not maintained as the data is subsequently modified. Irrespective of the fill factor, whenever a page splits, the rows of the original page are distributed between the two pages, and avg_page_space_used_in_percent settles accordingly. Therefore, if you use a nondefault fill factor, you should reapply it regularly to maintain its effect.
You can reapply a fill factor by re-creating the index or by using ALTER INDEX REORGANIZE or ALTER INDEX REBUILD, as was shown. ALTER INDEX REORGANIZE takes the fill factor specified during index creation into account. ALTER INDEX REBUILD also defaults to the original fill factor, but it allows a new fill factor to be specified, if required.
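To make these commands concrete, reapplying the fill factor might look like the following; the table and index names are placeholders:

```sql
-- Reapply the fill factor that was specified when the index was created.
ALTER INDEX i1 ON dbo.Test1 REORGANIZE;

-- Rebuild the index and, optionally, specify a new fill factor.
ALTER INDEX i1 ON dbo.Test1 REBUILD WITH (FILLFACTOR = 75);
```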
Without periodic maintenance of the fill factor, avg_page_space_used_in_percent for an index (or a table) eventually settles within a narrow range, whether the fill factor setting is the default or not. Therefore, unless you are prepared to maintain the fill factor regularly, the default fill factor is generally good enough in most cases.
You should also consider one final aspect when deciding upon the fill factor. Even for a heavy OLTP application, the number of database reads typically outnumbers writes by a factor of 5 to 10. Specifying a fill factor other than the default can degrade read performance by an amount inversely proportional to the fill factor setting, since it spreads the same keys over a larger number of pages. Before setting the fill factor at a database-wide level, use Performance Monitor to compare the SQL Server:Buffer Manager:Page reads/sec counter to the SQL Server:Buffer Manager:Page writes/sec counter, and use the fill factor option only if writes are a substantial fraction of reads (greater than 30 percent). Further, the optimizer takes the number of pages into account when constructing an execution plan; a larger number of pages can lead to poorer plan choices and degraded performance.
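If you prefer to sample these counters from T-SQL rather than Performance Monitor, the same values are exposed through sys.dm_os_performance_counters. Both counters are cumulative since server start, so take two samples some time apart and compare the differences:

```sql
-- Buffer Manager page reads and writes since server start.
-- These are cumulative counters: sample twice and take the
-- difference between samples to derive a per-second rate.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page reads/sec', 'Page writes/sec');
```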
Automatic Maintenance
In a database with a large volume of transactions, tables and indexes become fragmented over time. Thus, to improve performance, you should check the fragmentation of tables and indexes regularly and defragment the ones with a high amount of fragmentation. You may also need to take the workload into account and defragment indexes as dictated by the load as well as by the fragmentation level of the index. You can perform this analysis for a database by following these steps:
1. Identify all user tables in the current database to analyze fragmentation.
2. Determine the fragmentation of every user table and index.
3. Determine which user tables and indexes require defragmentation by taking into account the following considerations:
   - A high level of fragmentation, where avg_fragmentation_in_percent is greater than 20 percent
   - Not a very small table/index, that is, page_count is greater than 8
4. Defragment the tables and indexes with high fragmentation.
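As a minimal sketch of these steps, assuming the 20 percent and 8 page thresholds above plus an additional (assumed) 40 percent cutoff for choosing REBUILD over REORGANIZE, the analysis and defragmentation can be expressed like this:

```sql
-- Steps 1-3: find user indexes worth defragmenting.
DECLARE @sql nvarchar(max);
DECLARE frag_cursor CURSOR FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
         + CASE WHEN ips.avg_fragmentation_in_percent > 40
                THEN N' REBUILD;'     -- heavy fragmentation: rebuild
                ELSE N' REORGANIZE;'  -- moderate fragmentation: reorganize
           END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL,
                                        'LIMITED') AS ips
    JOIN sys.indexes AS i ON i.object_id = ips.object_id
                         AND i.index_id = ips.index_id
    JOIN sys.objects AS o ON o.object_id = ips.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.type = 'U'                           -- user tables only
      AND i.name IS NOT NULL                     -- skip heaps
      AND ips.avg_fragmentation_in_percent > 20  -- high fragmentation
      AND ips.page_count > 8;                    -- not a very small index

-- Step 4: defragment each qualifying index.
OPEN frag_cursor;
FETCH NEXT FROM frag_cursor INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM frag_cursor INTO @sql;
END;
CLOSE frag_cursor;
DEALLOCATE frag_cursor;
```

The cursor-based loop is only one possible implementation; a set-based batch or a scheduled Agent job would serve equally well.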
A sample SQL stored procedure is included here for easy reference. This script performs the basic actions, and I include it for educational purposes. For a fully functional script that includes a large degree of capability, I strongly recommend using the script from Michelle Ufford, located here: http://bit.ly/1cjmzXv. Another popular set of scripts is Ola Hallengren's, at http://bit.ly/JijaNI.
 