Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
...
Parses: 6,259.8 33,385.3
Hard parses: 3,125.6 16,669.7
...
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.99 Optimal W/A Exec %: 100.00
Library Hit %: 60.03 Soft Parse %: 50.07
Execute to Parse %: 0.06 Latch Hit %: 98.41
Parse CPU to Parse Elapsd %: 96.28 % Non-Parse CPU: 15.06
...
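The Soft Parse % shown in the Instance Efficiency section follows directly from the Load Profile: it is the share of parses that did not require a hard parse. A quick sanity check, using the per-second figures from the report above (a minimal sketch; the variable names are mine):

```python
# Values taken from the Load Profile section of the Statspack report above.
parses_per_sec = 6259.8
hard_parses_per_sec = 3125.6

# Soft Parse % = (total parses - hard parses) / total parses * 100
soft_parse_pct = 100.0 * (parses_per_sec - hard_parses_per_sec) / parses_per_sec
print(f"Soft Parse %: {soft_parse_pct:.2f}")  # matches the 50.07 in the report
```

A Soft Parse % of about 50 means half of all parses were hard parses, which is what sets up the latch contention discussed next.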
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
CPU time 23 32.7
LGWR worker group idle 18 16 876 22.8
heartbeat redo informer 15 15 1005 21.8
lreg timer 5 15 3001 21.7
latch: shared pool 15,076 0 0 .6
What we discover is that the hard parsing goes up a little bit, but the CPU time more than doubles. How could
that be? The answer lies in Oracle's implementation of latching. On this multi-CPU machine, when we could not
immediately get a latch, we spun. The act of spinning itself consumes CPU. Process 1 attempted many times to get
a latch on the shared pool only to discover that process 2 held that latch, so process 1 had to spin and wait for it
(consuming CPU). The converse would be true for process 2; many times it would find that process 1 was holding
the latch to the resource it needed. So, much of our processing time was spent not doing real work, but waiting for a
resource to become available. If we page down through the Statspack report to the “Latch Sleep Breakdown” report,
we discover the following:
Latch Name                    Get Requests       Misses      Sleeps   Spin Gets
-------------------------- --------------- ------------ ----------- -----------
shared pool                      2,296,041       75,240      15,267      60,165
Note how the number 15,267 appears in the SLEEPS column here? That number corresponds very closely to the
number of WAITS reported in the preceding “Top 5 Timed Events” report.
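The spin-then-sleep behavior described above (try repeatedly, burning CPU, then give up and go to sleep, recording a sleep) can be sketched as a toy model. This is an illustration only: the class, counter names, and spin count are hypothetical, and real Oracle latches are implemented with atomic compare-and-swap instructions inside the server, not with Python locks.

```python
import threading

class SpinLatch:
    """Toy model of a latch: spin a bounded number of times trying to
    get the lock, and only then give up the CPU and block (a "sleep")."""
    SPIN_COUNT = 2000  # hypothetical value; Oracle's spin count is internal

    def __init__(self):
        self._lock = threading.Lock()
        self.gets = 0    # successful acquisitions
        self.sleeps = 0  # times we stopped spinning and blocked

    def acquire(self):
        for _ in range(self.SPIN_COUNT):
            if self._lock.acquire(blocking=False):
                self.gets += 1
                return
            # Each failed attempt is pure CPU burn -- this is the cost
            # that shows up as more-than-doubled CPU time under contention.
        self.sleeps += 1      # counted in the Latch Sleep Breakdown
        self._lock.acquire()  # finally wait without consuming CPU
        self.gets += 1

    def release(self):
        self._lock.release()
```

In this model, every time a holder keeps the latch long enough, each competing process burns SPIN_COUNT failed attempts before sleeping, which is how total CPU time can grow sharply even though the amount of real work is unchanged.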
Note  The number of sleeps corresponds closely to the number of waits; this might raise an eyebrow. Why not
exactly? The reason is that the act of taking a snapshot is not atomic: a series of queries is executed to gather
statistics into tables during a Statspack snapshot, and each query is as of a slightly different point in time. So,
the wait event metrics were gathered at a time slightly before the latching details were.