A FETCH call issued by a batch program may be much less expensive. The 100 µs coefficient is a default value that may be very pessimistic in some environments, as the following batch measurement shows.
In the example we used earlier, an installation with a large, busy server (440 mips per processor) showed a measured TS CPU time of 0.7 µs by scanning a table and rejecting all the rows. The SELECT was repeated with every row being accepted: 13,000,000 rows in 222,000 4K pages, the SELECT list containing only one column.
The reported CPU time was 115 s, or 8.8 µs per FETCH (115 s / 13,000,000). The CPU time for each FETCH was therefore 8 µs (8.8 µs − 0.7 µs). This is the lower-bound figure; the CPU time becomes longer when many columns are selected. With a current medium server (250 mips per processor), the lower-bound figure would be (440 mips / 250 mips) × 8 µs = 14 µs with only one column selected.
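The arithmetic can be expressed as a short sketch (Python; all figures are taken from the measurement above, and the MIPS ratio scaling is the same rough assumption used in the text):

    # Derivation of the lower-bound FETCH CPU time from the measurement above.
    total_cpu_s = 115.0      # reported CPU time for the accept-all SELECT
    rows = 13_000_000        # rows fetched
    ts_cpu_us = 0.7          # measured TS CPU time per row (reject-all scan)

    per_fetch_us = total_cpu_s / rows * 1e6    # 8.8 us per FETCH
    fetch_only_us = per_fetch_us - ts_cpu_us   # ~8 us, the lower bound

    # Scaling to a slower processor by the MIPS ratio (a rough assumption).
    medium_server_us = (440 / 250) * 8         # ~14 us on a 250-mips processor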
If readers wish to repeat this sort of measurement on their own platforms,
they should be aware that a part of the FETCH-related CPU time may be reported
on the transaction monitor reports and not on the DBMS monitor reports.
Some products now support a multirow FETCH:

    FETCH NEXT ROWSET FROM cursorx FOR 10 ROWS
    INTO :hv1, :hv2, ... :hv10
This may reduce the CPU time for the application program interface (API) by up to 50% if a small number of columns is selected. Thus, the API CPU time for the 10-row FETCH above could be estimated as 10 × 0.1 ms / 2 = 0.5 ms.
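As a quick check of that estimate (Python; the 0.1 ms coefficient is the QUBE default per single-row FETCH referred to above, and the 50% saving is the upper limit just quoted):

    qube_fetch_ms = 0.1      # QUBE default CPU cost per single-row FETCH
    rows_per_rowset = 10

    single_row_ms = rows_per_rowset * qube_fetch_ms   # 1 ms for 10 FETCH calls
    rowset_ms = single_row_ms / 2                     # 0.5 ms with a 10-row rowset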
CPU Time per Sorted Row
The sort CPU time appears to be reasonably close to linear as long as memory
pools are large enough to eliminate disk I/O. The platform-specific coefficient is
therefore easy to determine by measuring a SELECT that causes a fairly large
sort, at least 10,000 rows, and then some time later, measuring the same SELECT
without ORDER BY. The rule of thumb is 10 µs per sorted row, so the expected difference in CPU time with 10,000 sorted rows would be 100 ms.
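A sketch of deriving the platform-specific coefficient from such a pair of measurements (Python; the two CPU times are hypothetical placeholders, not measured values):

    rows_sorted = 10_000

    cpu_with_order_by_ms = 250.0     # hypothetical: SELECT ... ORDER BY
    cpu_without_order_by_ms = 150.0  # hypothetical: same SELECT, no ORDER BY

    sort_us_per_row = ((cpu_with_order_by_ms - cpu_without_order_by_ms)
                       / rows_sorted) * 1_000   # 10 us per sorted row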
CPU ESTIMATION EXAMPLES
The ability to estimate CPU requirements is useful in many decision-making processes. Throughout this book we have made comparisons of various sorts based purely on elapsed times. One such example was the point at which an optimizer may decide to use a table scan as opposed to an index scan. Based on the QUBE figures of 10 ms per TR and 0.01 ms per TS, the cut-off point would be a filter factor of 0.1%.
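A minimal sketch of where that cut-off comes from (Python; it assumes the simplest QUBE comparison, one random touch per qualifying row for the index scan against one sequential touch per row for the table scan):

    tr_ms = 10.0    # QUBE cost of one random touch (TR)
    ts_ms = 0.01    # QUBE cost of one sequential touch (TS)

    # Index scan ~ FF * rows * tr_ms; table scan ~ rows * ts_ms.
    # The two break even when FF * tr_ms == ts_ms.
    cutoff_ff = ts_ms / tr_ms       # 0.001, i.e., a filter factor of 0.1%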
The issues discussed at the beginning of this chapter, such as comebacks and skip-sequential, make this point much more difficult to ascertain. The CPU cost involved would also be