In addition, it's possible to enable parallel queries and, at the same time, override the degree of parallelism
defined either manually at the segment level or by automatic degree of parallelism, with the following
SQL statement:
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4
Be aware, however, that hints take precedence over the setting at the session level. On the one hand, even if parallel
queries are disabled at the session level, hints can enable parallel execution. The only ways to really turn off parallel
queries are to set the parallel_max_servers initialization parameter to 0 or to configure the Resource Manager
to do so. On the other hand, even if a degree of parallelism is forced at the session level, hints can lead to another degree of
parallelism. To check whether parallel queries are enabled or disabled at the session level, you can execute a query
like the following one (the pq_status column is set to either ENABLED, DISABLED, or FORCED):
SELECT pq_status
FROM v$session
WHERE sid = sys_context('userenv','sid')
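The precedence rules described above can be sketched with the following statements; the table name t1 and the hint are placeholders for illustration:

```sql
-- Disable parallel queries for the current session only.
ALTER SESSION DISABLE PARALLEL QUERY;

-- Despite the session-level setting above, a hint can still
-- enable a parallel execution for an individual statement.
SELECT /*+ parallel(t1 2) */ * FROM t1;

-- Only an instance-wide setting really turns off parallel queries
-- (requires the ALTER SYSTEM privilege).
ALTER SYSTEM SET parallel_max_servers = 0;
```

Note that setting parallel_max_servers to 0 affects all sessions of the instance, so in practice the Resource Manager is usually the better tool for restricting specific users or consumer groups.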
The following execution plan shows an example with a parallel index range scan, a parallel full table scan, and
a parallel hash join. It's based on the px_query.sql script. Notice the hints: the parallel_index hint is used for
the index access, and the parallel hint is used for the table scan. Both hints use the object-level syntax to specify
a degree of parallelism of 2. In addition, the pq_distribute hint is used to specify the distribution method. The
TQ column contains three values, which means that three sets of slave processes are used to carry out this execution
plan. Operation 8 scans index i1 in parallel (this is possible because the index is partitioned). Then, operation 7
accesses table t1 with the rowids extracted from index i1. As shown in operation 6, partition granules are used for these
two operations. Then, the data is sent with a hash distribution to the consumers (the slave processes of set Q1,02).
When the consumers receive the data (operation 4), they pass it to operation 3 to build the in-memory hash table for
the hash join. As soon as all the data of table t1 has been processed, the parallel full scan of table t2 can start. This scan is
performed in operation 12. As shown in operation 11, block range granules are used for this operation. Then the data
is sent with a hash distribution to the consumers (the slave processes of set Q1,02). When the consumers receive the
data (operation 9), they pass it to operation 3 to probe the hash table. Finally, operation 2 sends the rows fulfilling the
join condition to the query coordinator (Figure 15-10 illustrates this execution plan):
SELECT /*+ leading(t1) use_hash(t2)
index(t1) parallel_index(t1 2)
full(t2) parallel(t2 2)
pq_distribute(t2 hash,hash) */ *
FROM t1, t2
WHERE t1.id > 9000
AND t1.id = t2.id+1
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop | TQ |IN-OUT| PQ Distri |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | | |
| 1 | PX COORDINATOR | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | | | Q1,02 | P->S | QC (RAND) |
|* 3 | HASH JOIN BUFFERED | | | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 5 | PX SEND HASH | :TQ10000 | | | Q1,00 | P->P | HASH |
| 6 | PX PARTITION HASH ALL | | 1 | 4 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS BY INDEX ROW| T1 | | | Q1,00 | PCWP | |