• Code tracing—receiving informative messages about the execution of an application at run time. […]
• Performance counters—components that allow you to track the performance of the application.
• Event logs—components that allow you to receive and track major events in the execution of the application."2
In a nutshell, instrumentation enables software to measure its own performance. The
ORACLE DBMS is well instrumented. It maintains hundreds of counters and timers that repre-
sent the workload executed as well as the performance of SQL statements, memory, file, and
network access. Measurements are available at instance level, session level, and SQL or PL/SQL
statement level. In 1999 Anjo Kolk, Shari Yamaguchi, and Jim Viscusi of Oracle Corporation, in
their acclaimed paper Yet Another Performance Profiling Method (or YAPP-Method), proposed
the following formula:
Response Time = Service Time + Wait Time
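The formula can be illustrated with a short sketch. All event names and timings below are made up for illustration; real figures would come from Oracle's dynamic performance views or an extended SQL trace file:

```python
# Sketch of the YAPP formula: response time = service time + wait time.
# All figures are hypothetical and given in seconds.

service_time = 1.25  # CPU time consumed by the session

# Hypothetical wait events and their cumulative wait times
wait_events = {
    "db file sequential read": 0.80,    # single-block disk reads
    "log file sync": 0.15,              # commit synchronization
    "SQL*Net message to client": 0.05,  # network round trips
}

wait_time = sum(wait_events.values())
response_time = service_time + wait_time

print(f"wait time:     {wait_time:.2f} s")      # 1.00 s
print(f"response time: {response_time:.2f} s")  # 2.25 s
```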
Even though the paper lacked a snapshot-based approach to measuring performance3 and stated that the wait event SQL*Net message from client should be ignored at session level (a mistake that still limits the explanatory power of performance diagnoses in recent publications4), it was a milestone towards a new tuning paradigm. Put simply, service time is the CPU time
consumed and wait time is the time spent waiting for one of several hundred wait events related to
disk access, synchronization, or network latency (see appendix Oracle Wait Events in Oracle
Database Reference and the dynamic performance view V$EVENT_NAME). Instrumentation of the
database server provides these measurements. Instrumentation itself has a certain impact on performance, termed measurement intrusion. Basically, an extended SQL trace file is a microsecond-by-microsecond account of a database session's elapsed time. In practice, some code
paths of the ORACLE DBMS are less instrumented than others and thus a SQL trace file does
not account for the entire elapsed time of a database session.
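As a rough illustration of this accounting, the sketch below parses WAIT lines in the style of an extended SQL trace file (nam='…' ela= microseconds) and compares the instrumented wait time against a total elapsed time. The trace excerpt, cursor numbers, and timings are assumptions for illustration, not output from a real trace:

```python
import re

# Two illustrative WAIT lines in the style of an extended SQL trace file
# (cursor numbers, event names, and ela values are made up).
trace_lines = [
    "WAIT #1: nam='db file sequential read' ela= 5120 file#=4 block#=12 tim=1000000",
    "WAIT #1: nam='log file sync' ela= 880 buffer#=210 tim=1006500",
]

WAIT_RE = re.compile(r"WAIT #\d+: nam='(?P<event>[^']+)' ela= (?P<ela>\d+)")

def instrumented_wait_us(lines):
    """Sum the ela= values (microseconds) of all WAIT lines."""
    total = 0
    for line in lines:
        m = WAIT_RE.match(line)
        if m:
            total += int(m.group("ela"))
    return total

elapsed_us = 6500  # hypothetical elapsed time of the traced interval
waited_us = instrumented_wait_us(trace_lines)
# Time spent in code paths that are not instrumented shows up as
# unaccounted-for time: elapsed time minus instrumented time.
unaccounted_us = elapsed_us - waited_us
print(waited_us, unaccounted_us)  # 6000 500
```

The gap between elapsed and instrumented time is exactly the effect described above: less-instrumented code paths leave part of the session's elapsed time unaccounted for in the trace.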
The response time perceived by an end user is affected by additional factors such as
network latency or processing in intermediate tiers such as application servers. Clearly, the
database server has its own perspective on response time, which is different from that of an
application server, which is still different from that of the end user. For example, instrumentation in Oracle10g introduced timestamps for wait events (WAIT entries) in extended SQL trace files. Formerly, just the database calls parse, execute, and fetch were tagged with timestamps.
From the database server's perspective, the response time of a SQL statement comprises the
interval between the arrival of the statement at the database server and the response sent to the
client. The former point in time is marked by the wait event SQL*Net message from client and
the latter by the wait event SQL*Net message to client. Due to network latency, the client will
2. http://en.wikipedia.org/wiki/Instrumentation_%28computer_programming%29
3. Both Statspack and AWR implement a snapshot-based approach to capturing performance data. Figures since instance or session startup are not snapshot-based.
4. Please see Chapter 27 for information on the relevance of SQL*Net message from client and how to derive think time from this wait event.
 