Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
4 4 4 NESTED LOOPS (cr=7 pr=0 pw=0 time=28 us cost=4 size=228 card=4)
1 1 1 TABLE ACCESS BY INDEX ROWID EMP (cr=3 pr=0 pw=0 time=11 us cost=2...
1 1 1 INDEX UNIQUE SCAN EMP_PK (cr=2 pr=0 pw=0 time=7 us cost=1 size=0...
4 4 4 INDEX RANGE SCAN SYS_IOT_TOP_182459 (cr=4 pr=0 pw=0 time=15 us...
Rows Row Source Operation
------- ---------------------------------------------------
4 NESTED LOOPS (cr=7 pr=3 pw=0 time=9 us cost=4 size=280 card=4)
1 TABLE ACCESS BY INDEX ROWID EMP (cr=3 pr=0 pw=0 time=0 us cost=2 size=30...)
1 INDEX UNIQUE SCAN EMP_PK (cr=2 pr=0 pw=0 time=0 us cost=1 size=0 ...)
4 INDEX RANGE SCAN SYS_IOT_TOP_93124 (cr=4 pr=3 pw=0 time=3 us cost=2 ...)
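The two row source listings above came from tracing a query of roughly the following shape; the table and column names here are assumptions reconstructed from the plan (a unique scan of EMP_PK driving a nested-loops join into the addresses' primary key index), not the exact text of the original query:

```sql
-- Hypothetical reconstruction of the traced query. The plan shows a
-- single EMP row fetched via EMP_PK, then a range scan of the
-- addresses' primary key (SYS_IOT_TOP_... in the IOT run) to pick up
-- that employee's four address rows.
select emp.empno, addr.*
  from emp, addresses addr
 where emp.empno = :empno
   and addr.empno = emp.empno;
```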
Both queries fetched exactly the same number of rows, but the HEAP table performed considerably more logical
I/O. As the degree of concurrency on the system goes up, we would expect the CPU used by the HEAP table
to go up more rapidly as well, while the query possibly waits for latches into the buffer cache. Using runstats (a utility
of my own design; refer to the introductory section of this book, "Setting Up Your Environment," for details), we can
measure the difference in latching. On my system, I observed the following:
Name Run1 Run2 Diff
STAT...buffer is pinned count 216,342 0 -216,342
STAT...consistent gets 723,461 438,275 -285,186
STAT...consistent gets from ca 723,461 438,275 -285,186
STAT...consistent gets pin (fa 362,888 77,700 -285,188
STAT...consistent gets pin 362,888 77,700 -285,188
STAT...no work - consistent re 362,870 77,682 -285,188
STAT...session logical reads 723,538 438,332 -285,206
STAT...table fetch by rowid 360,570 72,114 -288,456
STAT...buffer is not pinned co 649,026 288,456 -360,570
STAT...session pga memory 393,216 0 -393,216
STAT...session pga memory max 393,216 0 -393,216
LATCH.cache buffers chains 1,091,314 518,788 -572,526
STAT...logical read bytes from 5,927,223,296 3,590,815,744 -2,336,407,552
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
1,235,153 620,581 -614,572 199.03%
where Run1 was the HEAP_ADDRESSES table and Run2 was the IOT_ADDRESSES table. As you can see, there was a
dramatic and repeatable decrease in the latching taking place, mostly due to the cache buffers chains latch (the one
that protects the buffer cache). The IOT in this case would provide the following benefits:
•	Increased buffer cache efficiency, as any given query needs to have fewer blocks in the cache.
•	Decreased buffer cache access, which increases scalability.
•	Less overall work to retrieve our data, making these queries faster.
•	Possibly less physical I/O per query, as fewer distinct blocks are needed for any given query, and a single physical I/O of the addresses most likely retrieves all of them (not just one of them, as the heap table implementation does).
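A comparison like this one can be reproduced with a harness along the following lines. The runstats calls (rs_start, rs_middle, rs_stop) follow the utility described in the setup section; the column lists for the two address tables are assumptions for illustration, not the exact DDL used above:

```sql
-- Two tables with identical data and the same primary key; the only
-- structural difference is ORGANIZATION INDEX, which stores the rows
-- in the primary key index itself instead of a separate heap segment.
create table heap_addresses
( empno      number,        -- hypothetical columns for illustration
  addr_type  varchar2(10),
  street     varchar2(80),
  city       varchar2(80),
  state      varchar2(2),
  zip        number,
  primary key (empno, addr_type)
);

create table iot_addresses
( empno      number,
  addr_type  varchar2(10),
  street     varchar2(80),
  city       varchar2(80),
  state      varchar2(2),
  zip        number,
  primary key (empno, addr_type)
)
organization index;   -- the one change that makes this an IOT

-- Measure the latching difference between the two implementations.
exec runstats_pkg.rs_start;
-- ... run the workload against HEAP_ADDRESSES many times (Run1) ...
exec runstats_pkg.rs_middle;
-- ... run the identical workload against IOT_ADDRESSES (Run2) ...
exec runstats_pkg.rs_stop(100000);  -- report only differences > 100,000
```

Because the IOT keeps all of an employee's addresses physically together in a few index blocks, Run2 touches far fewer buffers, which is exactly what the cache buffers chains numbers above show.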