Evolution of Performance Monitors
The Sixties
The first monitors had two main objectives: billing, on the basis of the consumed
CPU time, and monitoring the hardware load. The monitors were usually a part
of the operating system. The load factors such as CPU busy, channel busy, and
volume busy were measured by sampling.
With regard to tuning, the most useful report was the one providing disk
I/O response time, together with its components broken down by disk volume.
Files were often moved from one volume to another to balance the load across
volumes. In extreme cases, active files were even moved close to each other on
the disk drive in order to reduce the average seek time.
The Seventies
The first DBMSs were enhanced with trace records that enabled the monitoring
of elapsed time by program. It was now possible to follow trends in the average
local response time and to identify slow programs. IMS, for instance, provided
a monitor report that showed the elapsed time by DL/I call and return code.
Although this report could be quite long, it was easy to find the calls that were
exceptionally slow. The most common reason for this was incorrect sizing of the
root addressable area, which contained root anchor points for the randomizer.
Many programming errors were identified by the use of this report; for instance,
a DL/I call might have been using the wrong index. There were no optimizers
in the prerelational DBMSs.
The Eighties
When the move to relational databases began, tuning focused on the optimizer.
EXPLAIN was the most important tool. Performance courses also emphasized
detailed SQL trace reports, but users soon realized that they could solve almost
all performance problems simply by checking the program counters and response
time components using reports such as the DB2 Accounting Trace.
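The idea behind EXPLAIN survives in essentially every relational DBMS today: the optimizer can be asked how it would execute a statement, without actually running it. As a minimal sketch, the example below uses SQLite's EXPLAIN QUERY PLAN purely as a stand-in for the DB2-era EXPLAIN discussed above; the table and index names are invented for illustration.

```python
import sqlite3

# Illustrative only: SQLite stands in for the mainframe DBMSs of the text.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
conn.execute("CREATE INDEX idx_customer ON orders(customer)")

# Ask the optimizer for its access path without executing the query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("Smith",)
).fetchall()
for row in plan:
    # The last column holds the human-readable plan step, which should
    # show a SEARCH using idx_customer rather than a full table scan.
    print(row[-1])
```

Checking whether the chosen access path uses the expected index, as here, is exactly the kind of verification the early EXPLAIN reports were used for.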
As the DBMS suppliers added more and more trace records, these per-
formance reports became longer and started to look complicated. At the same
time, applications grew more complex. Reading monitor reports became too
time-consuming. Both DBMS suppliers and third parties—companies specializing in
performance monitors and utilities—invested a great deal in more user-friendly
summaries, exception reports, and graphics. Real-time monitors became pop-
ular in many installations because they showed the reasons for gridlocks and
other system-level problems. However, batch reports based on long measurement
periods were found to be the only reliable way to find significant application per-
formance problems. One of the most popular reports provided key average values
by program name or transaction code. Although it could span several hundred