Figure 13-7. Top timed events for instances in SSKYPOOL1
From the output illustrated in Figure 13-7, approximately 29% of the waits are cluster related, and the time spent on "DB CPU" is about 24%. The cluster-related wait events, together with the resmgr: cpu quantum wait event, indicate that Resource Manager has been enabled and is throttling sessions waiting for CPU. The gc events (gc buffer busy acquire, gc current grant 2-way, gc remaster, gc current grant busy, and so forth) are an indication of contention. DB CPU by itself does not reflect the overall CPU usage, so the overall CPU usage should be investigated. One way to do this is to look at the top five wait events on each instance in the cluster.
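As a quick cross-check outside of AWR, a query along the following lines against GV$SYSTEM_EVENT gives a rough per-instance ranking of the non-idle wait events accumulated since instance startup. This is only a sketch; the cutoff of five events mirrors the "top five" view and is otherwise arbitrary.

-- Top five non-idle wait events on each instance, ranked by time waited
SELECT inst_id, event, wait_class,
       ROUND(time_waited_micro / 1000000) AS time_waited_sec,
       total_waits
FROM  (SELECT e.*,
              RANK() OVER (PARTITION BY inst_id
                           ORDER BY time_waited_micro DESC) AS rnk
       FROM   gv$system_event e
       WHERE  wait_class <> 'Idle')
WHERE  rnk <= 5
ORDER  BY inst_id, rnk;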
How many CPUs do these servers have? The number of CPUs can be determined either by checking with the system administrators or by using the command grep processor /proc/cpuinfo. These servers are configured with 16 CPUs each, which indicates that a lack of CPUs cannot be the reason for the high cluster-related waits.
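If database access is available, the CPU count visible to each instance can also be read from GV$OSSTAT; the following is a minimal sketch rather than a substitute for confirming the hardware configuration with the system administrators.

-- Number of CPUs reported by the operating system on each instance's server
SELECT inst_id, value AS num_cpus
FROM   gv$osstat
WHERE  stat_name = 'NUM_CPUS'
ORDER  BY inst_id;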
2. Check the top waits from SSKYPOOL2, which hosts the BIETL service. Using steps similar to those discussed in the previous step, generate the AWR cluster summary report for instances 6, 7, and 8. Figure 13-8 illustrates the "Top Timed Events."
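As a sketch of how such a cluster-wide summary can be produced (script names and prompts may vary by release), the global AWR report scripts shipped under $ORACLE_HOME/rdbms/admin can be run from SQL*Plus:

-- Global (RAC-wide) AWR report covering all instances
@?/rdbms/admin/awrgrpt.sql

-- Global AWR report restricted to selected instances (for example, 6, 7, and 8);
-- the script prompts for the instance list, the snapshot range, and the report name
@?/rdbms/admin/awrgrpti.sql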
Figure 13-8. Top timed events for instances in SSKYPOOL2
Instances in SSKYPOOL2 show a totally different view of the current condition; the cluster-related wait times are not as high as in Figure 13-7. However, SSKYPOOL2 shows high log file related wait times. Right away, Figure 13-8 indicates high wait times for log file sync, log file parallel write, and log file switch completion. The reason could be one or all of the following (one way to check is sketched after this list):
•	An excessive number of log files is being generated.
•	LGWR performance is poor due to a bad I/O subsystem; the I/O throughput of the disks is not good enough for such high write activity.
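One way to start separating these causes is sketched below: check how often the redo logs are switching on each thread and how long LGWR writes are taking. The 24-hour window and the specific events listed are illustrative assumptions.

-- Redo log switches per hour and per thread (instance) over the last day;
-- a sustained high rate suggests excessive redo generation or undersized logs
SELECT thread#,
       TO_CHAR(TRUNC(first_time, 'HH24'), 'DD-MON-YYYY HH24') AS hour,
       COUNT(*) AS log_switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY thread#, TRUNC(first_time, 'HH24')
ORDER  BY thread#, TRUNC(first_time, 'HH24');

-- Average wait (in centiseconds) for LGWR writes and for log file sync on each instance;
-- consistently high averages point at the I/O subsystem rather than redo volume
SELECT inst_id, event, total_waits, average_wait
FROM   gv$system_event
WHERE  event IN ('log file parallel write', 'log file sync')
ORDER  BY inst_id, event;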