INCREASING THE MAX WORKER THREADS SETTING
Running out of worker threads (THREADPOOL wait type) is often a symptom of large
numbers of concurrent parallel execution plans (since one thread is used per
processor), or it can even indicate that you've reached the performance capacity of
the server and need to buy one with more processors. Either way, you're usually better
off trying to solve the underlying problem rather than overriding the default Max
Worker Threads setting.
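If you do need to check the current setting, it is exposed through sp_configure as an advanced option. The following is a minimal sketch; a configured value of 0 means SQL Server calculates the thread limit automatically:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Show the configured and running values for Max Worker Threads
EXEC sp_configure 'max worker threads';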
Each worker thread requires 2MB of RAM on a 64-bit server and 0.5MB on a 32-bit server, so SQL
Server creates threads only as it needs them, rather than all at once.
The sys.dm_os_workers DMV contains one row for every worker thread, so you can see how many
threads SQL Server currently has by executing the following:
SELECT count(*) FROM sys.dm_os_workers
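For comparison, the upper limit SQL Server has calculated for worker threads (rather than the number it has created so far) is reported by sys.dm_os_sys_info; a minimal sketch:
-- max_workers_count is the ceiling SQL Server will allow, based on CPU count and architecture
SELECT max_workers_count FROM sys.dm_os_sys_info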
Schedulers
Each thread has an associated scheduler, whose job is to
schedule time on a processor for each of its threads. The
number of schedulers available to SQL Server equals the number
of logical processors that SQL Server can use, plus one extra
for the dedicated administrator connection (DAC).
You can view information about SQL Server's schedulers by
querying the sys.dm_os_schedulers DMV.
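For example, the following query (a minimal sketch; the column list is just an illustrative subset of the DMV) shows each scheduler that is available for user queries, along with how many tasks it currently owns and how many are waiting in its runnable queue:
SELECT scheduler_id,
       cpu_id,
       status,
       current_tasks_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'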
Figure 1-9 illustrates the relationship between sessions, tasks,
threads, and schedulers.
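You can also see that relationship directly in the DMVs by joining tasks to the workers executing them, and workers to their schedulers. This is a minimal sketch; the joins follow the address columns exposed by the DMVs, and the session_id filter is just a rough way to exclude most system sessions:
-- Map each session's tasks to the worker thread and scheduler running them
SELECT t.session_id,
       t.task_address,
       w.worker_address,
       s.scheduler_id
FROM sys.dm_os_tasks AS t
JOIN sys.dm_os_workers AS w ON t.worker_address = w.worker_address
JOIN sys.dm_os_schedulers AS s ON w.scheduler_address = s.scheduler_address
WHERE t.session_id > 50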
Windows is a general-purpose OS and is not optimized for
server-based applications such as SQL Server. Instead,
the goal of the Windows development team is to ensure that
all applications, written by a wide variety of developers inside
and outside Microsoft, work correctly and perform well. Because
Windows needs to work well in such a broad range of scenarios,
the development team is not going to do anything special that
would benefit less than 1% of applications.
FIGURE 1-9: The relationship between sessions, tasks, threads, and schedulers
For example, the scheduling in Windows is deliberately basic to ensure
that it suits the common case. Optimizing the way that
threads are chosen for execution is always going to be limited because of this broad performance
goal; but an application that does its own scheduling can be more intelligent about what to
run next, such as giving some threads a higher priority or recognizing that running one thread
now will prevent other threads from being blocked later.
 