-----------------------------------
| Id | Operation | Name |
-----------------------------------
| 0 | SELECT STATEMENT | |
|* 1 | HASH JOIN | |
|* 2 | TABLE ACCESS FULL| T1 |
| 3 | TABLE ACCESS FULL| T2 |
-----------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."ID"="T2"."ID")
2 - filter("T1"."N"=666)
Note
-----
- this is an adaptive plan
Recognizing Inefficient Execution Plans
The sad truth is, the only way to be sure that an execution plan isn't the most efficient one is to find another one
that's better. Nevertheless, simple checks might reveal clues that suggest an inefficient execution plan. The following
sections describe two checks that I use for that purpose. In Chapter 13, another check used for assessing the efficiency
of access paths is introduced.
Wrong Estimations
The idea behind this check is very simple. The query optimizer computes costs to decide which access paths, join
orders, and join methods should be used to get an efficient execution plan. If the computation of the cost is wrong,
the query optimizer is likely to pick a suboptimal execution plan. In other words, wrong estimations easily lead to
choosing the wrong execution plan.
Directly judging the cost of a SQL statement itself isn't feasible in practice. It's much easier to check other
estimations performed by the query optimizer, on which the computation of the cost is based: the estimated number
of rows returned by an operation (the cardinality). Checking the estimated cardinality is quite easy because you
can directly compare it with the actual cardinality, for example with the display_cursor function in the dbms_xplan
package. As you've just seen, the query optimizer did a good job only if the two cardinalities are close. One of
the central characteristics of this method is that no information about the SQL statement or the database structure is
necessary to judge the quality of the execution plan. You simply concentrate on comparing the estimations with the
actual data.
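As a minimal sketch of how both cardinalities can be obtained for such a comparison (the query shown here is an assumption reconstructed from the predicates in the plan above, not the script's exact statement), the gather_plan_statistics hint makes the database collect actual row counts at runtime, and dbms_xplan.display_cursor with the ALLSTATS LAST format displays them next to the estimates:

```sql
-- Execute the query with the gather_plan_statistics hint so that
-- actual row-source statistics are collected at runtime.
SELECT /*+ gather_plan_statistics */ count(*)
FROM t1 JOIN t2 ON t1.id = t2.id
WHERE t1.n = 666;

-- Display the plan of the last statement executed by this session,
-- including estimated (E-Rows) and actual (A-Rows) cardinalities.
SELECT *
FROM table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
```

Alternatively, setting the statistics_level initialization parameter to all at the session level collects the same runtime statistics without the hint, at some overhead for every statement.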
Let me illustrate this concept with an example. The following excerpt of the output produced by the
wrong_estimations.sql script shows an execution plan with its estimated (E-Rows) and actual (A-Rows) cardinalities.
As you can see, the estimation of operation 4 (and consequently of operations 2 and 3) is completely wrong. The query
optimizer estimated, for operation 4, a return of only 32 rows instead of 80,016. To make things worse, operations 2
and 3 are related-combine operations. This means that operations 6 and 7, instead of being executed only 32 times
as estimated, are in fact executed 80,016 and 75,808 times, respectively. This is confirmed by the values in the Starts
column. It's important to note that the estimations for operations 6 and 7 are correct. In fact, before making the
comparison, the actual cardinality (A-Rows) must be divided by the number of executions (Starts):
SELECT count(t2.col2)
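The per-execution comparison just described can also be sketched directly against the v$sql_plan_statistics_all dynamic performance view, which exposes the estimated cardinality alongside the runtime counters. The sql_id value below is a placeholder for the cursor under investigation:

```sql
-- For a cached cursor, compare the optimizer's estimate with the
-- actual number of rows returned per execution of each operation.
SELECT id,
       operation,
       cardinality                                        AS estimated_rows,
       last_starts,
       last_output_rows,
       round(last_output_rows / nullif(last_starts, 0))   AS actual_rows_per_start
FROM v$sql_plan_statistics_all
WHERE sql_id = '...'        -- placeholder: sql_id of the cursor
AND child_number = 0
ORDER BY id;
```

A large discrepancy between estimated_rows and actual_rows_per_start for an operation is exactly the kind of wrong estimation this check is designed to reveal.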