SqlID         AvgElapsed PhysIOMb CellUncompMB InterconnectMB SQL
------------- ---------- -------- ------------ -------------- ------------------------------
4m958qrfphdqy      10.94       81            0             81 select /* hcc_myobj_compah */
13jh8dfsb5sam       1.86      118        2,080            389 select /* hcc_myobj_compqh */ m
3mwrbh4cmbf3v       1.80    2,256        2,258            517 select /* hcc_myobj_uncomp */
How It Works
In this recipe, we demonstrated using statistics to measure the amount of physical I/O read from disk (physical read
total bytes), the amount of I/O sent across the Exadata storage interconnect (cell physical IO interconnect bytes), and
the amount of uncompressed data processed on the storage cells (cell IO uncompressed bytes). The statistics were queried
from V$MYSTAT in Listing 16-4, but you can use V$SESSTAT, V$SYSSTAT, or the AWR and ASH views to collect the same
information at a different scope.
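If you want to capture these metrics yourself, a minimal sketch of such a query follows (the join of V$MYSTAT to
V$STATNAME is standard; the megabyte conversion is our own formatting, and this is not necessarily the exact text
of Listing 16-4):

SELECT sn.name,
       ROUND(ms.value/1048576) mb
FROM   v$mystat ms,
       v$statname sn
WHERE  ms.statistic# = sn.statistic#
AND    sn.name IN ('physical read total bytes',
                   'cell physical IO interconnect bytes',
                   'cell IO uncompressed bytes');

Run it in your session immediately before and after the test query, and difference the values to isolate the work
done by that query.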
On a per-SQL ID basis, V$SQL also contains IO_CELL_UNCOMPRESSED_BYTES, IO_INTERCONNECT_BYTES, and
multiple physical I/O columns. You can query V$SQL, V$SQLSTATS, or DBA_HIST_SQLSTAT to report the same
information at a broader scope or for historical time periods.
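A query along the following lines would produce per-SQL ID output similar to the listing above; the column names
are real V$SQL columns, but the comment-tag filter and the unit conversions are assumptions on our part:

SELECT sql_id,
       ROUND(elapsed_time/GREATEST(executions,1)/1000000,2) avgelapsed,
       ROUND(physical_read_bytes/1048576)                   physiomb,
       ROUND(io_cell_uncompressed_bytes/1048576)            celluncompmb,
       ROUND(io_interconnect_bytes/1048576)                 interconnectmb,
       SUBSTR(sql_text,1,30)                                sql_text
FROM   v$sql
WHERE  sql_text LIKE 'select /* hcc_myobj%';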
When tables are compressed with HCC, they consume less physical disk space. Less disk space means less
I/O required to access the data from disk and, ideally, better performance. But, as demonstrated, there is often a
performance trade-off between I/O and CPU.
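To see the disk-space side of this trade-off, compare the segment sizes of the compressed and uncompressed copies
of the data. The table names below are placeholders for the uncompressed, QUERY HIGH, and ARCHIVE HIGH tables used
in this recipe's tests:

SELECT segment_name,
       ROUND(bytes/1048576) mb
FROM   user_segments
WHERE  segment_name IN ('MYOBJ_UNCOMP','MYOBJ_COMPQH','MYOBJ_COMPAH');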
As data is retrieved from HCC tables or partitions, the unit of I/O that Oracle performs is the compression unit (CU).
A single I/O against an HCC table will read an entire CU, which is typically four or more blocks, depending on the
compression type and your data.
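You can confirm which HCC compression type a given table uses from the COMPRESS_FOR column in the dictionary views;
again, the table names are placeholders:

SELECT table_name, compression, compress_for
FROM   user_tables
WHERE  table_name IN ('MYOBJ_COMPQH','MYOBJ_COMPAH');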
The tests in this recipe demonstrated that archive compression yielded a better compression ratio than the
QUERY HIGH compressed table but took more time to execute the query, despite the I/O savings on the storage cells.
This reveals a potential performance “gotcha” for HCC: when data is compressed, upon retrieval it must be
decompressed at some layer in the infrastructure. HCC aims to perform as much decompression as possible on the
storage cells, using their ample processors to do so, but even so, the higher the compression ratio, the
more work is required to decompress the data. Where Exadata chooses to decompress data (compute servers or
storage cells) depends on the amount of data requested and the access method used to retrieve it. Single-row
lookups via index scans will pass compressed data to the compute nodes for decompression there. These are
important considerations when designing your HCC strategy.
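When weighing these trade-offs, you can estimate the ratio each compression type would achieve before committing
to one. The following is a minimal sketch using the 11.2-era signature of DBMS_COMPRESSION.GET_COMPRESSION_RATIO;
the scratch tablespace and table name are assumptions:

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',        -- assumed scratch tablespace
    ownname        => USER,
    tabname        => 'MYOBJ_UNCOMP', -- placeholder table name
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE(l_comptype_str||' estimated ratio: '||l_cmp_ratio);
END;
/

Repeat with DBMS_COMPRESSION.COMP_FOR_ARCHIVE_HIGH to compare the two HCC types discussed here.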
Recipe 16-9 discusses compression and decompression behavior in detail and provides you with the information to
understand and measure the performance impact of HCC decompression.
Note
One other important point from the tests in this recipe has to do with cell offload and Smart Scan.
When using HCC, your table or partition segments will likely be smaller. The fewer blocks your segments
have, the less likely it is that your scans will qualify for direct path reads. Direct path reads are required for
Smart Scans; if they are disabled, not only will your storage cells send more data to the compute grid, bypassing
Oracle's cell offload functionality, but the CUs will also be shipped in compressed format. This means that your
database servers will be responsible for decompression.
Please see Recipe 15-7 to learn more about direct path reads and their impact on Smart Scan, and Recipe 16-9
to learn more about HCC decompression.
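One quick way to check whether a test query actually benefited from offload is to compare the bytes eligible for
predicate offload with what the cells returned via Smart Scan, using the same V$MYSTAT join shown earlier. Both
statistic names are standard; running this immediately after the test query in the same session is an assumption:

SELECT sn.name,
       ROUND(ms.value/1048576) mb
FROM   v$mystat ms,
       v$statname sn
WHERE  ms.statistic# = sn.statistic#
AND    sn.name IN ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');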