SQL> select num_rows, blocks from dba_tables
     where table_name = 'MYOBJ_UNCOMP';

  NUM_ROWS     BLOCKS
---------- ----------
  20423000     289446

SQL>
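The NUM_ROWS and BLOCKS figures in dba_tables come from optimizer statistics, so they are only as current as the last statistics collection; if they look stale or are NULL, the statistics can be refreshed first. A minimal sketch, assuming the table lives in your current schema:

SQL> exec dbms_stats.gather_table_stats(ownname => user, tabname => 'MYOBJ_UNCOMP')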
Using this uncompressed table as the baseline, we will use the compression advisor to estimate the compression
ratio, create HCC and OLTP compressed tables using CREATE TABLE AS with the COMPRESS FOR clause, compare the
actual compression ratio and block counts to the compression advisor's output, and capture the time it takes to create
our compressed tables. Table 16-1 provides a summary of the findings on an Exadata X2-2 Quarter Rack.
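As a concrete sketch of the build step (Recipe 16-2 covers it in detail), each compressed table can be created with CREATE TABLE AS SELECT and the appropriate COMPRESS FOR clause; the table names below match Table 16-1, and SQL*Plus timing captures the create time:

SQL> set timing on
SQL> create table myobj_compqh
     compress for query high    -- or query low, archive low, archive high, oltp
     as select * from myobj_uncomp;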
Table 16-1. Compression Comparisons

Table           Compression Type      Advisor Ratio  Actual Ratio / Blocks  Create Time
--------------  --------------------  -------------  ---------------------  -----------
MYOBJ_UNCOMP    Uncompressed          N/A            1 to 1 / 289,446       N/A
MYOBJ_COMPQL    Warehouse Query Low   8.6 to 1       8.59 to 1 / 33,821     20 seconds
MYOBJ_COMPQH    Warehouse Query High  19.6 to 1      19.06 to 1 / 15,188    35 seconds
MYOBJ_COMPAL    Archive Low           19.7 to 1      19.22 to 1 / 15,058    36 seconds
MYOBJ_COMPAH    Archive High          31.5 to 1      27.76 to 1 / 10,426    225 seconds
MYOBJ_COMPOLTP  OLTP compression      2.9 to 1       2.9 to 1 / 97,490      31 seconds
Note: Please see Recipe 16-1 to learn about Oracle's compression advisor and Recipe 16-2 to learn how to build compressed tables using CREATE TABLE AS SELECT.
As you can see in Table 16-1, the actual compression ratios closely matched the estimated compression ratios
generated with the compression advisor. This is not surprising given how the compression advisor executes,
as described in Recipe 16-1. The time required to build the compressed tables generally tracks the compression
level: the higher the compression level, the more CPU resources are required to compress the
data and the longer the build operation takes.
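As a sketch of how those estimates are produced (Recipe 16-1 covers this fully), the advisor can be invoked through DBMS_COMPRESSION.GET_COMPRESSION_RATIO; the parameter names below follow the 11gR2 signature, and SCRATCH_TBS is a placeholder for a scratch tablespace of your choosing:

SQL> set serveroutput on
SQL> declare
       l_blkcnt_cmp    pls_integer;
       l_blkcnt_uncmp  pls_integer;
       l_row_cmp       pls_integer;
       l_row_uncmp     pls_integer;
       l_cmp_ratio     number;
       l_comptype_str  varchar2(100);
     begin
       dbms_compression.get_compression_ratio(
         scratchtbsname => 'SCRATCH_TBS',   -- placeholder scratch tablespace
         ownname        => user,
         tabname        => 'MYOBJ_UNCOMP',
         partname       => null,
         comptype       => dbms_compression.comp_for_query_high,
         blkcnt_cmp     => l_blkcnt_cmp,
         blkcnt_uncmp   => l_blkcnt_uncmp,
         row_cmp        => l_row_cmp,
         row_uncmp      => l_row_uncmp,
         cmp_ratio      => l_cmp_ratio,
         comptype_str   => l_comptype_str);
       dbms_output.put_line('Estimated ratio: ' || round(l_cmp_ratio, 2) || ' to 1');
     end;
     /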
In our example, notice the nonlinear compression ratios and build times as we go from Query High to Archive
Low compression; in this case, the fact that Archive Low compression yielded only a marginally better compression
ratio and took roughly the same amount of time to build is related to the distribution of data in our source table and
the degree of duplication among column values in the table.
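Once the tables are built, the actual figures can be read back from the data dictionary; a quick check, assuming all six tables share the MYOBJ_ prefix:

SQL> select table_name, compression, compress_for, blocks
     from dba_tables
     where table_name like 'MYOBJ_%'
     order by blocks;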
How It Works
When data is inserted into HCC segments, Oracle examines the input stream of data from the direct path insert/load
operation and divides this stream into arrays of columns. It then employs a compression algorithm, based on the HCC
compression type specified, and stores the compressed column arrays in logical structures called compression units.
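One way to confirm that rows loaded through a direct path operation actually landed in HCC format is DBMS_COMPRESSION.GET_COMPRESSION_TYPE, which reports the compression type of an individual row; the sketch below assumes the 11gR2 version of the package, whose numeric return value maps to the DBMS_COMPRESSION COMP_FOR_* constants:

SQL> select dbms_compression.get_compression_type(
            user, 'MYOBJ_COMPQH', rowid) as comp_type
     from myobj_compqh
     where rownum = 1;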