Of course, there are other ways to accomplish the same task. The key point, however, is that your code must be aware of the situations in which tempdb is recreated and the staged data is gone.
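As an illustration, the sketch below uses a hypothetical staging table, tempdb.dbo.StagingData (the table name and columns are assumptions, not from the original example), and simply rebuilds it whenever it no longer exists, which is the case after tempdb has been recreated.

-- A minimal sketch, assuming a hypothetical staging table tempdb.dbo.StagingData.
-- Tempdb is recreated on every SQL Server restart, so staged data must be treated
-- as disposable and rebuilt whenever it is missing.
IF OBJECT_ID(N'tempdb.dbo.StagingData', N'U') IS NULL
BEGIN
    CREATE TABLE tempdb.dbo.StagingData
    (
        ID INT NOT NULL,
        Payload VARCHAR(255) NOT NULL
    );

    /* Repopulate the staged data here */
END;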
Optimizing tempdb Performance
Tempdb is usually the busiest database on the server. In addition to temporary objects created by users, SQL Server uses this database to store internal result sets during query execution, the version store, internal temporary tables used for sorting, hashing, and database consistency checks, and so forth. Tempdb performance is a crucial component of overall server health and performance. Thus, in most cases, you would like to put tempdb on the fastest disk array you have available.
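As a sketch, tempdb files can be relocated to a faster drive with ALTER DATABASE; the F:\TempDB path below is an assumption, and tempdev/templog are the default logical file names, so check tempdb.sys.database_files for the actual names in your instance.

-- Relocate the tempdb files to a faster drive.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = N'tempdev', FILENAME = N'F:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE
    (NAME = N'templog', FILENAME = N'F:\TempDB\templog.ldf');
-- The new locations take effect after the next SQL Server restart,
-- when tempdb is recreated.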
Redundancy of the array is another issue. On one hand, you do not need to worry much about the data that you
are storing in tempdb. On the other hand, if the tempdb disk array goes down, SQL Server becomes unavailable.
As a general rule, then, you would like the disk array to be redundant.
That said, if tempdb performance becomes a bottleneck, your high-availability strategy supports the simultaneous failure of two or more nodes, spare parts are available, and a process is in place that allows you to bring the failed node(s) back online quickly, you could consider making the tempdb disk array non-redundant. This is a dangerous route, however, and you need to weigh the pros and cons of this decision very carefully.
Trace flag T1118 prevents SQL Server from using mixed extents for space allocation. By allocating uniform extents only, you reduce the number of changes required in the allocation map pages during object creation. Moreover, even if temporary table caching keeps only one data page cached, that page belongs to its own uniform extent with free pages available. As a result, SQL Server does not need to search, and potentially allocate, mixed extents with free pages when allocating pages two through eight of the table. Those pages can be stored in the same uniform extent to which the first cached data page belongs.
The bottom line is that trace flag T1118 can significantly reduce allocation map page contention in tempdb. This trace flag should be enabled on every SQL Server instance; there is no downside to doing so.
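For reference, the flag can be enabled globally at run time with DBCC TRACEON, as in the sketch below; adding -T1118 to the SQL Server startup parameters makes it persist across restarts.

-- Enable trace flag 1118 globally for the running instance.
DBCC TRACEON (1118, -1);

-- Verify that the flag is active.
DBCC TRACESTATUS (1118, -1);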
Another way to reduce contention is by creating multiple tempdb data files. Every data file has its own set of
allocation map pages and, as a result, allocations are spread across these files and pages. This reduces the chances of
contention because fewer threads are then competing simultaneously for access to the same allocation map pages.
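As an example, the statements below add three more data files to tempdb; the file names, path, and sizes are illustrative assumptions, not recommendations. Keeping the files equally sized helps the proportional-fill algorithm spread allocations evenly across them.

-- A sketch that adds three extra data files to tempdb; adjust names, paths,
-- and sizes to your environment.
ALTER DATABASE tempdb ADD FILE
    (NAME = N'tempdev2', FILENAME = N'F:\TempDB\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = N'tempdev3', FILENAME = N'F:\TempDB\tempdb3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = N'tempdev4', FILENAME = N'F:\TempDB\tempdb4.ndf', SIZE = 4GB, FILEGROWTH = 512MB);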
There is no generic rule that defines the optimal number of tempdb data files; everything depends on the actual system workload and behavior. The old guidance to have the number of data files equal the number of logical processors is no longer the best advice. While that approach still works, an extremely large number of data files can even degrade system performance in some cases.
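When tuning this, it can help to see how many logical processors SQL Server detects and how many data files tempdb currently has; a minimal sketch:

-- Number of logical processors visible to SQL Server.
SELECT cpu_count FROM sys.dm_os_sys_info;

-- Current tempdb data files (type_desc = 'ROWS' excludes the transaction log).
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';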
The Microsoft CSS team performed a stress test of tempdb performance using a server with 64 logical processors under a heavy load of 500 connections that created, populated, and dropped temporary tables in a loop. Table 12-3 displays the execution time based on the number of data files in tempdb and the Trace Flag T1118 configuration.
Table 12-3. Execution time based on the number of data files in tempdb

                  1 data file      8 data files     32 data files    64 data files
Without T1118     1,080 seconds    45 seconds       17 seconds       15 seconds
With T1118        525 seconds      38 seconds       15 seconds       15 seconds
As you can see, creating more than one data file dramatically improved tempdb performance, although the gains level off at some point. For instance, there was only a marginal difference in performance between the 32 and 64 data file scenarios.