There are two other methods, Open() and Close(), called for each operator during execution. The Open() method initializes the operator before the first GetRow() call. The Close() method performs cleanup at the end of the execution.
Note
As you probably noticed, there are two kinds of operators. The first group, called non-blocking operators, consumes rows from its children and produces output immediately. The second group, called blocking operators, must consume all rows from its children before producing any output. In our example, the Sort operator is the only blocking operator; it consumes all of the Customers table rows before sorting them.
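The difference is easy to see with two variations of the same query. The dbo.Customers table and its columns below are assumed for illustration, matching the example in the text:

```sql
-- Non-blocking: the scan streams rows to the client as they are read
SELECT CustomerId, Name
FROM dbo.Customers;

-- Blocking: the Sort operator must consume every row from the scan
-- before it can return the first row in sorted order
SELECT CustomerId, Name
FROM dbo.Customers
ORDER BY Name;
```

In the second query's execution plan, the Sort operator sits between the scan and the client, and no rows leave it until the entire input has been accumulated.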
Even though blocking operators are completely normal and cannot be avoided in many cases, there are a couple
of issues associated with them. The first issue is memory usage. Every operator, blocking or non-blocking, requires
some memory to execute; however, blocking operators can use a large amount of memory when they accumulate and
process rows.
Each query requests a memory grant from SQL Server for its execution. The amount of memory required by the operators is the most important factor that affects the size of that grant. Internally, SQL Server distributes queries among three different pools based on the amount of memory that they request. Queries in the highest memory-consuming pool generally wait longer for the memory to be granted.
You can monitor memory grant requests with the sys.dm_exec_query_memory_grants DMV. You can also monitor various performance counters in the Memory Manager object in Performance Monitor.
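A query such as the following sketch shows current memory grant requests, joined to the statement text; rows where the grant has not yet been given (wait_time_ms is accumulating) indicate queries waiting in one of the pools:

```sql
-- Active and pending memory grant requests, largest first
SELECT
    mg.session_id,
    mg.requested_memory_kb,
    mg.granted_memory_kb,
    mg.ideal_memory_kb,
    mg.wait_time_ms,
    t.text AS sql_text
FROM sys.dm_exec_query_memory_grants mg
    CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) t
ORDER BY mg.requested_memory_kb DESC;
```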
Tip
Correct memory size estimation is very important; both overestimation and underestimation negatively affect the system. Overestimation wastes server memory and can increase how long a query waits for its memory grant. Underestimation, on the other hand, can force SQL Server to perform sorting or hashing operations in tempdb rather than in memory, which is significantly slower.
Memory estimation for an operator depends on the cardinality and average row size estimation. Either error
leads to an incorrect memory grant request. The typical sources of cardinality estimation errors are inaccurate
statistics and/or Query Optimizer model limitations. They can often be addressed by statistics maintenance and
query simplification and optimization. However, dealing with row size estimation errors is a bit trickier.
SQL Server knows the size of the fixed-length data portion of the row. For variable-length columns, however, it estimates that data populates, on average, 50 percent of the defined column size. For example, if you had two columns defined as varchar(100) and nvarchar(200), SQL Server would estimate that every data row stores 50 and 200 bytes in those columns, respectively (nvarchar stores two bytes per character). For (n)varchar(max) and varbinary(max) columns, SQL Server uses 4,000 bytes as the base figure.
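The arithmetic can be annotated directly on a table definition; the table below is hypothetical and exists only to illustrate the 50-percent rule:

```sql
-- Per-column variable-length size estimates used for memory grants
CREATE TABLE dbo.EstimationDemo
(
    Col1 VARCHAR(100),   -- max 100 bytes; estimated at 50% = 50 bytes
    Col2 NVARCHAR(200),  -- max 400 bytes (2 bytes/char); estimated at 50% = 200 bytes
    Col3 VARCHAR(MAX)    -- estimated from the 4,000-byte base figure
);
```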
Tip
You can improve row size estimation by defining variable-length columns roughly two times larger than the average size of the data stored in them.
Let's look at an example and create two tables, as shown in Listing 25-4.
 
 