The last piece of information in the first 100 lines is the time model statistics. As discussed in Chapter 4, this
data tells you how much time the database engine spent processing key operations. In the following example, the
statistics point out that most of the time (95.5%) is spent executing SQL statements:
Statistic Time (s) % DB time
----------------------------------- -------------------- ---------
sql execute elapsed time 6,103.6 95.5
DB CPU 1,701.1 26.6
parse time elapsed 26.8 .4
sequence load elapsed time 0.1 .0
PL/SQL execution elapsed time 0.0 .0
repeated bind elapsed time 0.0 .0
DB time 6,392.9
background elapsed time 107.5
background cpu time 38.8
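The % DB time column is simply each statistic divided by DB time. The following minimal Python sketch reproduces the percentages from the figures in the excerpt above (keep in mind that time model components overlap, for instance DB CPU is partly contained in sql execute elapsed time, so the percentages are not expected to add up to 100):

```python
# Time model figures taken from the excerpt above, in seconds
db_time = 6392.9
sql_execute_elapsed = 6103.6
db_cpu = 1701.1

# % DB time = statistic / DB time * 100; components can overlap,
# so the column doesn't sum to 100
pct_sql_execute = round(sql_execute_elapsed / db_time * 100, 1)
pct_db_cpu = round(db_cpu / db_time * 100, 1)

print(pct_sql_execute)  # 95.5
print(pct_db_cpu)       # 26.6
```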
Based on the information provided in the first 100 lines of the report, you should have a fairly good
understanding of what's going on, for example, to what extent the system is loaded and which main operations it
performs. The next step is to find out what the top SQL statements are. For this purpose, the report contains several
lists ordered by different criteria (CPU, elapsed time, number of logical reads, number of physical reads, number of
executions, and number of parse calls). Since elapsed time is, in most situations, the most important criterion, I
advise you to continue the analysis by looking at the section called SQL ordered by Elapsed time.
Before looking at the list itself, it's important to check whether the captured SQL statements account for a
significant part of the DB time. If, as in the following excerpt, the captured SQL statements account for most of the
DB time (95.4%), the list contains useful information:
-> Total DB Time (s): 6,393
-> Captured SQL accounts for 95.4% of Total DB Time
-> SQL reported below exceeded 1.0% of Total DB Time
However, if the captured SQL statements account for only a small percentage of the DB time (for example, 10-20%),
the list is almost useless. Either the SQL statements that contributed most to the DB time were flushed from
the library cache before the snapshot was taken, or no single SQL statement consumed a significant part of the
DB time. In the former case, the repository doesn't contain enough information to completely analyze the situation. In
the latter, many SQL statements are responsible for the load. Hence, as explained in the "Analysis Roadmap" section in
Chapter 4, it doesn't make sense to focus on the top SQL statements.
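The decision just described can be summarized in a short Python sketch. Note that the cutoff used here is an illustrative assumption for the sake of the example, not a fixed rule from the report itself:

```python
def top_sql_list_is_useful(captured_pct, cutoff=50.0):
    """Heuristic: the top-SQL list is worth analyzing only when the
    captured SQL statements cover a large share of the DB time.
    The 50% cutoff is an illustrative assumption, not a hard rule."""
    return captured_pct >= cutoff

print(top_sql_list_is_useful(95.4))  # True: list contains useful information
print(top_sql_list_is_useful(15.0))  # False: list is almost useless
```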
As illustrated in the following excerpt, for each SQL statement, you can see not only information about the total
and average elapsed time, but also figures about CPU utilization and physical reads:
Elapsed Elap per CPU Old
Time (s) Executions Exec (s) %Total Time (s) Physical Reads Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
1861.91 124,585 0.01 29.1 32.06 194,485 3739063178
Module: Swingbench User Thread
select customer_id, cust_first_name, cust_last_name, nls_language,
nls_territory, credit_limit, cust_email, account_mgr_id
from customers where customer_id = :1
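The derived columns in the excerpt follow directly from the raw figures. A minimal Python check, using the values copied from the report above and the total DB time (6,392.9 seconds) from the earlier time model excerpt:

```python
# Figures copied from the "SQL ordered by Elapsed time" excerpt above
elapsed_s = 1861.91
executions = 124_585
total_db_time_s = 6392.9  # from the "Total DB Time" line earlier

elap_per_exec = elapsed_s / executions        # average elapsed time per execution
pct_total = elapsed_s / total_db_time_s * 100 # share of the total DB time

print(round(elap_per_exec, 2))  # 0.01
print(round(pct_total, 1))      # 29.1
```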