allow for calculation execution, even when the storage engine is busy executing other
requests. Threads in this pool are often performing CPU-intensive operations.
Analysis Services uses the process-execution pool to perform storage engine operations,
such as retrieving data from the disk, aggregating data, or loading data into the system.
Threads in this pool are often performing I/O operations or operations involving access to
external systems, such as loading data from relational databases. This pool therefore
needs more threads, because its threads often sit idle waiting for external operations to
complete.
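The division of labor between the two pools can be sketched with two dedicated executors, one small pool for CPU-bound calculation work and one larger pool for storage-engine work that mostly waits on I/O. This is a hypothetical Python sketch, not Analysis Services code; all names and sizes are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: a small pool for CPU-bound query calculations and a
# larger pool for storage-engine operations that mostly wait on I/O.
query_pool = ThreadPoolExecutor(max_workers=4)     # CPU-intensive calculations
process_pool = ThreadPoolExecutor(max_workers=16)  # I/O-bound: disk reads, external loads

def run_query(values):
    # Placeholder for a CPU-intensive calculation.
    return sum(values)

def load_partition(name):
    # Placeholder for an I/O-bound storage-engine operation.
    return f"loaded {name}"

q = query_pool.submit(run_query, [1, 2, 3])
p = process_pool.submit(load_partition, "Sales2008")
print(q.result())  # 6
print(p.result())  # loaded Sales2008
```

The I/O pool is sized larger for the reason the text gives: its threads spend much of their time blocked, so more of them are needed to keep work flowing.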
Architecture of a Thread Pool
All types of thread pools have the same architecture and are configured using similar
configuration properties. Analysis Services implements a thread pool using the I/O
Completion Port mechanism, which is provided by the Windows operating system (see
Figure 28.1). Every pool starts with the minimum number of threads specified by the
server's MinThreads property. At Analysis Services startup, these threads are created and
attached to the I/O Completion Port; the pool is then ready to serve requests for threads.
Each thread pool has three queues to support requests with
different priorities:
. High-priority queue
. Normal-priority queue
. Low-priority queue
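The three-queue dispatch can be sketched in a few lines of plain Python (a simplified model; the real pool uses the Windows I/O Completion Port, not in-process deques):

```python
from collections import deque

# Hypothetical model of the three request queues.
HIGH, NORMAL, LOW = 0, 1, 2
queues = [deque(), deque(), deque()]

def enqueue(request, priority=NORMAL):
    # Most requests arrive with normal priority.
    queues[priority].append(request)

def next_request():
    # Serve the first request in the highest-priority nonempty queue.
    for q in queues:
        if q:
            return q.popleft()
    return None

enqueue("lazy aggregation", LOW)
enqueue("user query")            # normal priority by default
enqueue("cancel command", HIGH)
print(next_request())  # cancel command
print(next_request())  # user query
print(next_request())  # lazy aggregation
```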
When a request for a thread arrives at the pool, it is placed in the queue according to the
priority of the request. Most requests have normal priority and are placed in the normal-
priority queue. If the thread pool has available threads, the I/O Completion Port provides
a thread to the first request in the highest-priority nonempty queue. The I/O Completion Port
mechanism is designed in a way that it supplies a thread only if the number of concur-
rently executed threads being processed by a single CPU is less than a certain value. This
value is defined by the Analysis Services Concurrency property. However, you cannot
assume that the number of threads executing on a single CPU is always less than the value
of the Concurrency property. The I/O Completion Port knows which threads are blocked
on I/O operations (such as reading data from the disk or network); it does not count them
as active and therefore allows new threads to be created. As a result, the total number of
threads in the pool (active and blocked) can exceed the Concurrency value, and when an
I/O operation completes and blocked threads become active again, the number of active
threads can temporarily exceed the Concurrency value.
Because the system cannot predict such situations, it is not easy to specify an optimal
Concurrency value. If the Concurrency value is too low, the CPU can be underutilized and
performance degrades. If the value is too high, too many threads execute in parallel, which
also degrades performance through resource contention and context switches.
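The throttling behavior described above can be modeled in a few lines. This is a hypothetical sketch of the bookkeeping; the real mechanism lives inside the Windows I/O Completion Port:

```python
# Hypothetical model of the I/O Completion Port's concurrency throttle:
# a new thread is released only while the number of *active* (not blocked)
# threads is below the Concurrency value.
CONCURRENCY = 2

class PoolModel:
    def __init__(self):
        self.active = 0
        self.blocked = 0

    def can_dispatch(self):
        return self.active < CONCURRENCY

    def dispatch(self):
        if self.can_dispatch():
            self.active += 1
            return True
        return False

    def block_on_io(self):
        # A thread entering an I/O wait no longer counts as active,
        # so the port may release another thread in its place.
        self.active -= 1
        self.blocked += 1

    def io_complete(self):
        # The thread becomes active again, possibly pushing the
        # active count above CONCURRENCY.
        self.blocked -= 1
        self.active += 1

pool = PoolModel()
pool.dispatch(); pool.dispatch()   # two active threads: at the limit
assert not pool.can_dispatch()
pool.block_on_io()                 # one thread blocks on disk I/O
pool.dispatch()                    # port releases a replacement thread
pool.io_complete()                 # I/O finishes: 3 active > CONCURRENCY
print(pool.active, pool.blocked)   # 3 0
```

Note how the final state has three active threads against a Concurrency of two, which is exactly the transient overshoot the text warns about.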
After the thread pool has finished serving requests from the highest-priority queue, it
starts serving requests from the lower-priority queues. To prevent situations when only the