SQL> select count(object_name) from d14.myobj_compah;

COUNT(OBJECT_NAME)
------------------
          20423000
Statistic Value
---------------------------------------------------------------------- ----------------
CPU used by this session 142
... Output omitted for brevity
cell CUs processed for uncompressed 1254
cell CUs sent uncompressed 735
cell IO uncompressed bytes 3600680935
cell physical IO bytes eligible for predicate offload 84197376
cell physical IO interconnect bytes 370107624
physical read total bytes 84221952
Note from this example that the number of bytes sent over the storage interconnect exceeded both the bytes eligible for predicate offload and the physical read total bytes; this is because 735 CUs were transmitted uncompressed.
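One way to capture these numbers for your own session is to query V$MYSTAT joined to V$STATNAME immediately after running the query of interest. The sketch below is a minimal example; the statistic names are taken from the output above, and the exact set you report on can be widened as needed.

```sql
-- Minimal sketch: report session-level HCC/offload statistics.
-- V$MYSTAT holds the current session's counters; V$STATNAME maps
-- statistic# to a readable name.
select sn.name, ms.value
from   v$mystat  ms,
       v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes',
                   'cell CUs sent uncompressed',
                   'cell IO uncompressed bytes');
```

If interconnect bytes greatly exceed the offload-eligible bytes, as in the example output, you are likely shipping uncompressed CUs from the cells to the compute nodes.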
How It Works
With HCC, compression operations always occur on the compute nodes as data is inserted via direct path load/insert.
Decompression can take place on either the storage cells or the compute servers, depending on the access method,
volume of data being returned, rows and columns being retrieved, and so forth. In general, the following statements
are true:
When smart scans are used to access HCC segments, decompression occurs for the selected rows and columns on the storage cells.

When smart scans are not used, decompression takes place on the compute nodes; entire compression units (think in terms of multiple blocks and many rows per block) are sent over the storage interconnect, loaded into the database buffer cache, and uncompressed.
The second statement implies that index access to HCC segments means that decompression will take place on the compute nodes. There is also a 1 MB boundary that governs where Exadata chooses to decompress: if the amount of data read is greater than 1 MB and smart scans are used, decompression takes place on the storage cells. Any I/O requests smaller than 1 MB in size cause the storage cells to ship compressed CUs to the compute node, which then performs the decompression.
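To see whether a particular statement was actually smart-scanned, and therefore decompressed on the cells, you can check the offload columns in V$SQL. This is a hypothetical sketch; substitute the SQL_ID of the statement you are investigating.

```sql
-- Sketch: per-statement offload activity. If IO_CELL_OFFLOAD_ELIGIBLE_BYTES
-- is zero, the statement was not offloaded, and any HCC decompression
-- occurred on the compute nodes.
select sql_id,
       io_cell_offload_eligible_bytes as eligible_bytes,
       io_interconnect_bytes          as interconnect_bytes
from   v$sql
where  sql_id = '&sql_id';
```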
The script used in the solution of this recipe queries several Exadata and HCC-specific performance statistics,
including CPU usage, interconnect and physical I/O, cell I/O, and cell CU-related statistics. Together, these statistics
can help paint a picture of where decompression is taking place and how much it is costing your database tier CPUs.
The script in Listing 16-13 can certainly be expanded to report on systemwide HCC information by using V$SYSSTAT, on session-level information by using V$SESSTAT, as well as by using various AWR views.
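As one example of the systemwide variant, the following sketch pulls the cell CU counters from V$SYSSTAT, which aggregates the same statistics across all sessions since instance startup; the LIKE patterns are illustrative and can be broadened.

```sql
-- Sketch: systemwide view of HCC compression-unit activity.
select name, value
from   v$sysstat
where  name like 'cell CUs%'
or     name like 'cell IO uncompressed%'
order  by name;
```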
Why is it important to know where HCC decompression is taking place? The numbers in Table 16-3 tell the story;
decompression is expensive from a CPU perspective, and decompressing HCC data on the database servers can be
costly, cause performance issues, or create scalability challenges. Oracle software licensing on the compute servers
costs you more than three times as much as the processor licenses on the storage cells—keep this in mind as you
begin deploying HCC for your databases. If your HCC tables or partitions will be queried, it is best to do so using smart
scans and, as covered in Chapter 15, smart scans require that the compressed form of your segments be large.
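Because HCC compression occurs only during direct path operations, segments are typically built with CTAS or direct path inserts. The sketch below shows the general pattern; the table and source names are made up for illustration.

```sql
-- Sketch: creating and loading an HCC segment. COMPRESS FOR QUERY HIGH is
-- one of the HCC levels; compression is applied because both statements
-- use direct path loads.
create table d14.myobj_hcc
compress for query high
as
select * from dba_objects;

insert /*+ APPEND */ into d14.myobj_hcc
select * from dba_objects;
commit;
```

Conventional-path inserts into this table would store the new rows without HCC compression, which is another reason to favor bulk, direct path loading for HCC segments.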
Note: To learn more about Exadata Smart Scan, please see the recipes in Chapter 15.
 
 