...
93
7
...
96
43 rows selected.
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart|Pstop |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 43 | 172 | 4 (0)| 00:00:01 | | |
| 1 | PARTITION HASH ALL| | 43 | 172 | 4 (0)| 00:00:01 | 1 | 4 |
|* 2 | INDEX RANGE SCAN | T_IDX | 43 | 172 | 4 (0)| 00:00:01 | 1 | 4 |
-----------------------------------------------------------------------------------------
EODA@ORA12CR1> set autotrace off
So, even though Oracle used the index in a range scan, the data is obviously not sorted. In fact, you might observe
a pattern in this data: there are four sorted result sets here. The ... replaces values that were increasing in value;
between the rows with USER_ID = 13 and 97, the values were increasing in the output. Then the row with
USER_ID = 22 appeared. What we are observing is Oracle returning “sorted data” from each of the four hash
partitions, one after the other.
This is just a warning: unless your query has an ORDER BY, you have no reason to anticipate the data being
returned to you in any kind of sorted order whatsoever. (And no, GROUP BY doesn't have to sort either! There is no
substitute for ORDER BY.)
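If you need sorted output, the fix is simply to ask for it. As a sketch (the exact query text is not shown above, so the column and table names T and USER_ID are taken from the surrounding discussion):

```sql
-- Only an ORDER BY guarantees ordering. With it in place, Oracle is
-- free to merge the already-sorted runs coming from each hash
-- partition, or to sort the combined result -- either way, the
-- ordering is now guaranteed rather than accidental.
select user_id
  from t
 where user_id between 13 and 97
 order by user_id;
```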
Does that mean partitioning won't affect OLTP performance at all in a positive sense? No, not entirely—you
just have to look in a different place. In general, it will not positively impact the performance of your data retrieval
in OLTP; rather, care has to be taken to ensure data retrieval isn't affected negatively. But on data modification,
partitioning may provide salient benefits in highly concurrent environments.
Consider the preceding a rather simple example of a single table with a single index, and add into the mix a
primary key. Without partitioning, there is a single table: all insertions go into this single table. There is contention
perhaps for the freelists on this table. Additionally, the primary key index that would be on the OBJECT_ID column
would be a heavy right-hand-side index, as we discussed in Chapter 11. Presumably it would be populated by a
sequence; hence, all inserts would go after the rightmost block leading to buffer busy waits. Also, there would be a
single index structure T_IDX that people would be contending for. So far, a lot of single items.
Enter partitioning. You hash partition the table by OBJECT_ID into 16 partitions. There are now 16 tables to
contend for, and each table has one-sixteenth the number of users hitting it simultaneously. You locally partition the
primary key index on OBJECT_ID into 16 partitions. You now have 16 right-hand sides, and each index structure will
receive one-sixteenth the workload it had before. And so on. That is, you can use partitioning in a highly concurrent
environment to reduce contention, much like we used a reverse key index in Chapter 11 to reduce the buffer busy
waits. However, you must be aware that the very process of partitioning out the data consumes more CPU itself than
not having partitioning. That is, it takes more CPU to figure out where to put the data than it would if the data had but
one place to go.
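The layout described above can be sketched in DDL. This is an assumption-laden illustration, not the book's original script: the table name T, the OBJECT_ID column, and the index name T_IDX come from the text, while the OBJECT_NAME column is hypothetical, added only so the secondary index has something to index:

```sql
-- Hash partition the table by OBJECT_ID into 16 partitions, spreading
-- concurrent inserts across 16 separate segments (16 sets of freelists).
create table t
( object_id   number not null,
  object_name varchar2(30)
)
partition by hash (object_id) partitions 16;

-- Locally partitioned primary key index: 16 index structures, hence
-- 16 "right-hand sides" for the sequence-populated key instead of one
-- hot rightmost block.
alter table t
  add constraint t_pk primary key (object_id)
  using index local;

-- The secondary index can be locally partitioned as well, splitting
-- that contention point 16 ways too.
create index t_idx on t(object_name) local;
```

Note that a local index can enforce the primary key here only because the partitioning key (OBJECT_ID) is the constrained column; a local unique index must contain the partitioning key.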
So, as with everything, before applying partitioning to a system to increase performance, make sure you
understand what that system needs. If your system is currently CPU bound, but that CPU usage is not due to
contention and latch waits, introducing partitioning could make the problem worse, not better!