resulting in a more selective index. Columns with high cardinality (many distinct values) are
highly selective and should be placed first in a composite index.
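As a sketch of this rule, the following T-SQL compares the cardinality of two columns and then orders a composite index accordingly. The table and column names (`dbo.Employee`, `LastName`, `DepartmentID`) are illustrative, not from the book's sample database:

```sql
-- Compare distinct-value counts to decide column order in a composite index.
SELECT COUNT(DISTINCT LastName)     AS LastNameCardinality,
       COUNT(DISTINCT DepartmentID) AS DepartmentCardinality
FROM dbo.Employee;

-- Assuming LastName has far more distinct values, it leads the composite index.
CREATE NONCLUSTERED INDEX IX_Employee_LastName_DepartmentID
    ON dbo.Employee (LastName, DepartmentID);
```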
Simple Access Pattern Indexing
After you have the basics set up, ask the software developers for their software class
or UML modeling diagrams. DBAs should get these, or at least ask for them, when they receive the logical
data models. Class-modeling diagrams contain good information about how the software is designed
to interact with the database. If you look at the methods on the classes, you might see methods named
GetEmployeeByName() or SelEmployeesByBirthDate(). These method names provide excellent informa-
tion about how you should index attributes in the database model that are not part of key
structures but still provide a level of access.
In the two Employee selection method examples above, you would examine the model around
the employee to determine where the Employee Name and Birth Date attributes are stored, and then add
nonclustered indexes on those attributes. If you don't have access to the modeling artifacts,
you'll have to capture similar information on a regular basis from the index usage DMVs.
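Indexes matching those two hypothetical access methods might look like the following. The column names (`LastName`, `FirstName`, `BirthDate`) are assumptions about how the Employee table is modeled:

```sql
-- Supports GetEmployeeByName(): seek on the name columns.
CREATE NONCLUSTERED INDEX IX_Employee_LastName_FirstName
    ON dbo.Employee (LastName, FirstName);

-- Supports SelEmployeesByBirthDate(): seek or range scan on birth date.
CREATE NONCLUSTERED INDEX IX_Employee_BirthDate
    ON dbo.Employee (BirthDate);
```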
Complex Access Pattern Indexing
Look at adding covering indexes to eliminate bookmark lookups. One of the easiest ways to find
these index candidates is to look for high values in the user_lookups column of the index DMV
sys.dm_db_index_usage_stats. It may be possible to speed up some queries by adding columns as
included columns in nonclustered indexes. If a high-volume (repetitive) T-SQL
query can use a few columns stuffed into the nonclustered index to avoid scanning a clustered index, that
is a significant reduction in I/O activity. The trick is figuring out which columns are referenced by the
query: the select-list columns plus any required join or sort columns. This technique
of covering a query with an index is particularly useful; it not only reduces I/O but also speeds up response
time. However, it takes more time to carefully evaluate and tune.
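A sketch of this process: first query the DMV for indexes driving bookmark lookups, then cover a frequent query with included columns. The Employee table and its columns are illustrative assumptions:

```sql
-- Find objects in the current database with high user_lookups counts,
-- a sign that queries are doing bookmark (key) lookups.
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name                   AS IndexName,
       s.user_lookups
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_lookups > 0
ORDER BY s.user_lookups DESC;

-- Cover a frequent query by including its select-list columns,
-- so the lookup into the clustered index is avoided entirely.
CREATE NONCLUSTERED INDEX IX_Employee_LastName_Covering
    ON dbo.Employee (LastName)
    INCLUDE (FirstName, BirthDate, DepartmentID);
```

The INCLUDE clause stores the extra columns only at the leaf level of the index, so they satisfy the select list without widening the index key.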
Benchmarking for Evaluation
Earlier in the chapter, we encouraged the creation of a set of benchmark tables to store some metadata
about the nature of the database objects. You categorized tables in a database according to their functional
purposes and stitched them together with high-impact use cases so that you could talk back to business owners
about database issues in their terms. In SQL Server 2000, you could not get object-level usage statistics.
In this section, you'll use the DMVs exposed in SQL Server 2005 to add to this metadata the statistical
information you'll need when deciding whether the database is meeting the
performance requirement expectations.
Creating the Benchmark Schema
Earlier we discussed capturing information in a metadata format for expected performance requirements
such as personas and use cases. This data is useful when you can connect these user concepts to real
metrics from the database objects themselves. As part of this chapter, we've created a model for
capturing database statistics. To create these tables, download the script from www.wrox.com. The
scripts will create the tables and populate the type-table information to match the discussions earlier in
the performance requirements section of this chapter.
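The downloadable script defines the actual schema; purely as a hypothetical sketch of the idea, a benchmark table that snapshots index usage might look like this (table and column names are invented for illustration, not taken from the Wrox script):

```sql
-- Hypothetical benchmark table for periodic DMV snapshots.
CREATE TABLE dbo.BenchmarkIndexUsage (
    CaptureDate datetime NOT NULL DEFAULT (GETDATE()),
    ObjectName  sysname  NOT NULL,
    IndexName   sysname  NULL,
    UserSeeks   bigint   NOT NULL,
    UserScans   bigint   NOT NULL,
    UserLookups bigint   NOT NULL,
    UserUpdates bigint   NOT NULL
);

-- Snapshot the DMV on a schedule; its counters reset when the instance restarts,
-- so persisting them is what makes historical benchmarking possible.
INSERT INTO dbo.BenchmarkIndexUsage
      (ObjectName, IndexName, UserSeeks, UserScans, UserLookups, UserUpdates)
SELECT OBJECT_NAME(s.object_id), i.name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID();
```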