that query. So the DBA still has some work to do in deciding which indexes to implement to get the best
compromise between insert, update, and delete performance.
Now you'll jump straight into using DTA to create some indexes.
Using DTA to Tune Individual Queries
Imagine a scenario where a developer DBA is writing queries for a new database and wants to create an
initial set of indexes. To do this, you need a database containing data, and that data has to be representative
of the final data distribution. In the sample workload, you'll examine the index recommendations with
three levels of data in the target tables. Table 11-1 lists the numbers of rows in each table where you'll
run DTA for the DTApeopleInsert.sql query.
Table 11-1: Number of Rows for Each Tuning Stage

Table        Insert     Update
People       0          1,000,000
BoysNames    100        100
GirlsNames   100        100
LastNames    2,000      2,000
Before starting to run DTA, you need to figure out how to determine the effectiveness of each of the DTA
recommendations. DTA will give you its expectation of performance improvement, but you should check
its effectiveness for yourself, so you need to have some way to measure before and after performance.
You'll use three metrics for this. The first is the insert time for each row. To get this you can use a simple
stored procedure that calls the insert stored procedure multiple times and reports how many rows it has
inserted and the insert rate at pre-determined intervals. The second metric is the output of statistics
IO. You can gather this data in SQL Server Management Studio by turning on the Query option
for statistics IO. The third metric is statistics time.
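The first metric can be captured with a small timing harness, and the other two with session options. A minimal sketch follows; the procedure names (TuneInsertRate, InsertPerson) are hypothetical stand-ins for the sample workload's own objects:

```sql
-- Hypothetical timing harness: calls the insert procedure in a loop and
-- prints rows inserted and the insert rate at a fixed reporting interval.
CREATE PROCEDURE dbo.TuneInsertRate
    @TotalRows   int,
    @ReportEvery int
AS
BEGIN
    DECLARE @i int, @start datetime;
    SET @i = 0;
    SET @start = GETDATE();
    WHILE @i < @TotalRows
    BEGIN
        EXEC dbo.InsertPerson;   -- stand-in for the workload's insert proc
        SET @i = @i + 1;
        IF @i % @ReportEvery = 0
            PRINT CAST(@i AS varchar(12)) + ' rows, '
                + CAST(@i / NULLIF(DATEDIFF(second, @start, GETDATE()), 0)
                       AS varchar(12)) + ' rows/sec';
    END
END
GO

-- Second and third metrics: enable per-session I/O and timing statistics
-- before running the query under test.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
```

The same SET options are what Management Studio toggles under the covers when you turn them on in the Query menu.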
Before you start tuning, you need to capture your starting metrics. To make sure you get consistent
results, you also need to capture a cold time and several warm times and then average the warm times.
A cold time is the time it takes to run a query against SQL Server for the first time. The execution plan
and data aren't in cache, so everything has to be created from scratch. Subsequent runs of the query
will be much quicker (warmed up) because of caching and will be more representative of a live system.
Measuring performance gains against average warm times will give you the best idea of what gains to
expect on a busy system.
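To force a genuine cold run between measurements on a test server, you can flush the plan cache and buffer pool before the first execution. These DBCC commands are documented, but they evict cached data for the whole instance, so never run them on a production system:

```sql
-- Test systems only: make the next execution a true cold run.
CHECKPOINT;              -- flush dirty pages so the buffer pool can be fully emptied
DBCC DROPCLEANBUFFERS;   -- empty the buffer pool (clean pages)
DBCC FREEPROCCACHE;      -- empty the plan cache, forcing recompilation
```

Run the query once for the cold time, then several more times and average those runs for the warm time.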
One other thing you'll look at in this example is the wait time during the query execution. You can only
do this if you run the procedure in a tight loop because unless the query is very slow-running, you won't
be able to capture the instantaneous results you need to see any waits. By running in a tight loop, you
can sample the wait stats repeatedly and stand a good chance of seeing what the query is waiting on.
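One way to sample those waits is to poll the request DMV from a second connection while the tight loop runs in the first; this sketch assumes SQL Server 2005 or later, and the session ID is a placeholder you'd replace with the looping session's actual ID:

```sql
-- Run repeatedly from a second connection while the insert loop executes.
-- Each pass shows what, if anything, the target session is waiting on.
SELECT r.session_id,
       r.status,
       r.wait_type,        -- NULL when the request is running, not waiting
       r.wait_time,        -- ms spent in the current wait
       r.last_wait_type    -- most recently completed wait
FROM   sys.dm_exec_requests AS r
WHERE  r.session_id = 55;  -- replace 55 with the loop's session_id
```

Sampling this every few hundred milliseconds gives a rough but useful picture of the dominant wait types for the query.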