| 9 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 10 | PX SEND HASH | :TQ10001 | | | Q1,01 | P->P | HASH |
| 11 | PX BLOCK ITERATOR | | | | Q1,01 | PCWC | |
|*12 | TABLE ACCESS FULL | T2 | | | Q1,01 | PCWP | |
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."ID"="T2"."ID"+1)
   8 - access("T1"."ID">9000)
  12 - filter("T2"."ID"+1>9000)
Figure 15-10. Even though three sets of slave processes are shown, a single data flow operation can't be executed with
more than two sets of slave processes
According to the execution plan and Figure 15-10, one data flow operation and three table queues are used. Note
that even though Figure 15-10 shows three sets of slave processes (with the requested degree of parallelism of 2, that
would be six slave processes in total), only two sets, in other words four slave processes, are actually allocated from
the pool during the execution. This is because a single data flow operation can't use more than two sets of slave
processes. What happens in this particular case is that the set used for scanning table t1 (Q1,00) never works
concurrently with the set used for scanning table t2 (Q1,01). Therefore, the query coordinator simply (re)uses the
same slave processes for these two sets.
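For reference, the join and filter predicates shown above could be produced by a query of the following shape against two tables t1 and t2 that both have an id column. The hints and the selected columns are assumptions chosen for illustration; only the table names, the join condition, and the restriction on t1.id come from the predicate section of the plan.

-- Hypothetical query consistent with the predicates and the hash-hash
-- distribution shown in the execution plan; requested degree of parallelism 2.
SELECT /*+ leading(t1) use_hash(t2) parallel(t1 2) parallel(t2 2)
           pq_distribute(t2 hash hash) */
       t1.id, t2.id
  FROM t1, t2
 WHERE t1.id = t2.id + 1
   AND t1.id > 9000;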
The HASH JOIN BUFFERED operation (operation 3 in the previous execution plan) not only creates a hash
table containing the data returned by the build input (operations 5 through 8), but also buffers the data returned by
the probe input (operations 10 through 12) that fulfills the join condition (hence the suffix BUFFERED). This is a
special behavior that the database engine has to implement because of an internal limitation: two distribution
operations can't be active at the same time. From a performance point of view, the buffering might be a major issue.
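When the buffering is suspected of causing trouble, one way to quantify it is to look at the work area statistics of the hash join after executing the statement. The following is a minimal sketch: dbms_xplan.display_cursor and v$sql_workarea are standard Oracle interfaces, but the &sql_id placeholder and the assumption that runtime statistics were gathered (for example, with the gather_plan_statistics hint or statistics_level set to all) depend on your environment.

-- Runtime statistics of the cursor: the Used-Mem and Used-Tmp columns of the
-- HASH JOIN BUFFERED operation show how much memory and temporary space the
-- buffering consumed (ALLSTATS ALL also covers the slave processes).
SELECT * FROM table(dbms_xplan.display_cursor('&sql_id', NULL, 'ALLSTATS ALL'));

-- Work area statistics of the same cursor (replace &sql_id accordingly).
SELECT operation_type, policy, last_memory_used, last_tempseg_size
  FROM v$sql_workarea
 WHERE sql_id = '&sql_id';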
Caution