The query against the disorganized table bears out the simple math we did earlier: we performed 20,000+ logical I/Os per execution (100,000 total blocks queried over the five runs of the query). We processed each and every block 20 times! On the other hand, the physically COLOCATED data took the logical I/Os way down. Here is the perfect illustration of why rules of thumb
are so hard to provide—in one case, using the index works great, and in the other it doesn't. Consider this the next time
you dump data from your production system and load it into development, as it may very well provide at least part of the
answer to the question, “Why is it running differently on this machine—aren't they identical?” They are not identical.
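The clustering effect behind these numbers can be sketched with a toy model. This is not Oracle internals, just illustrative arithmetic: the row count, the 20-rows-per-block figure, and the round-robin scatter pattern are assumptions chosen to mirror the "each block 20 times" example.

```python
# Toy model (assumed sizes, not Oracle internals): count buffer-cache block
# visits for an index range scan that processes one row at a time.

def block_visits(row_to_block, rows_in_index_order):
    """Each time the next row lives in a different block than the previous
    row, the block must be obtained from the buffer cache again."""
    visits = 0
    last_block = None
    for row in rows_in_index_order:
        block = row_to_block[row]
        if block != last_block:
            visits += 1
            last_block = block
    return visits

ROWS, ROWS_PER_BLOCK = 20_000, 20          # hypothetical figures
BLOCKS = ROWS // ROWS_PER_BLOCK            # 1,000 table blocks

# COLOCATED: index order matches physical order -> each block visited once.
colocated = {r: r // ROWS_PER_BLOCK for r in range(ROWS)}

# DISORGANIZED: rows scattered round-robin -> every consecutive pair of
# index entries points at a different block.
disorganized = {r: r % BLOCKS for r in range(ROWS)}

print(block_visits(colocated, range(ROWS)))     # → 1000
print(block_visits(disorganized, range(ROWS)))  # → 20000
```

In the scattered layout each of the 1,000 blocks is visited 20 times, exactly the 20-fold blowup described above.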
Recall from Chapter 6 that increased logical I/O is the tip of the iceberg here. Each logical I/O involves one or more latches into the buffer cache. In a multiuser/multi-CPU situation, the CPU used by the second query would undoubtedly have gone up many times faster than the first as we spin and wait for latches. The second example query not only performs more work, but also will not scale as well as the first.
Note
THE EFFECT OF ARRAYSIZE ON LOGICAL I/O
It is interesting to note the effect of ARRAYSIZE on the logical I/O performed. ARRAYSIZE is the number of rows
Oracle returns to a client when it asks for the next row. The client will then buffer these rows and use them up
before asking the database for the next set of rows. ARRAYSIZE may have a very material effect on the logical
I/O performed by a query, because if you have to access the same block over and over again
across calls (across fetch calls specifically, in this case) to the database, Oracle must retrieve that block again
from the buffer cache. Therefore, if you ask for 100 rows from the database in a single call, Oracle might be able
to fully process a database block and not need to retrieve that block again. If you ask for 15 rows at a time, Oracle
may well have to get the same block over and over again to retrieve the same set of rows.
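This fetch-boundary effect can be sketched numerically. The model below is an illustration, not Oracle's accounting: 50 rows per block and 20,000 rows returned are assumed figures, and "gets" simply counts the distinct blocks touched by each fetch call, so a block whose rows straddle two calls is counted once per call.

```python
# Hypothetical sketch of ARRAYSIZE vs. logical I/O: a block whose rows
# straddle a fetch-call boundary is gotten from the buffer cache again
# on the next call.

def consistent_gets(total_rows, rows_per_block, arraysize):
    gets = 0
    for start in range(0, total_rows, arraysize):
        rows = range(start, min(start + arraysize, total_rows))
        # distinct blocks touched by this fetch call; blocks repeated
        # across calls are counted once per call, modeling the re-get
        blocks = {r // rows_per_block for r in rows}
        gets += len(blocks)
    return gets

# assumed: 20,000 rows, 50 rows per block
print(consistent_gets(20_000, 50, 15))   # → 1600
print(consistent_gets(20_000, 50, 100))  # → 400
```

Even in this crude model, raising the fetch size from 15 to 100 cuts the block gets by roughly a factor of four, the same order of improvement the trace output below shows.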
In the example earlier in this section, we were using the SQL*Plus default array fetch size of 15 rows (if you divide
the total rows fetched (100005) by the number of fetch calls (6675), the result is very close to 15). If we were
to compare the execution of the previous queries using 15 rows per fetch versus 100 rows per fetch, we would
observe the following for the COLOCATED table:
select * from colocated a15 where x between 20000 and 40000
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
20001 20001 20001 TABLE ACCESS BY INDEX ROWID BATCHED COLOCATED
(cr=2899 pr=0 pw=0 ...
20001 20001 20001 INDEX RANGE SCAN COLOCATED_PK
(cr=1374 pr=0 pw=0 ...
select * from colocated a100 where x between 20000 and 40000
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
20001 20001 20001 TABLE ACCESS BY INDEX ROWID BATCHED COLOCATED
(cr=684 pr=0 pw=0 ...
20001 20001 20001 INDEX RANGE SCAN COLOCATED_PK
(cr=245 pr=0 pw=0 ...
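The larger array size takes the consistent gets (cr) on the table access step from 2,899 down to 684. In SQL*Plus, the fetch size is controlled with the SET ARRAYSIZE command (valid values are 1 to 5000; the default is 15), so the second run could have been produced along these lines:

```sql
SQL> SET ARRAYSIZE 100
SQL> SELECT * FROM colocated WHERE x BETWEEN 20000 AND 40000;
```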
 
 