It did 10,004 I/Os that time—a marked difference. So, where did all of the I/O come from? That was Oracle rolling
back the changes made to that database block. When we ran the second query, Oracle knew that all of the blocks
retrieved and processed by that query had to be “as of” the start time of the transaction. When we got to the buffer
cache, we discovered that the block in the cache was simply “too new”—the other session had modified it some 10,000
times. Our query could not see those changes, so it started walking the undo information and undid the last change.
It discovered this rolled-back block was still too new and rolled it back again. It did this repeatedly until
it finally found the version of the block that was committed in the database when our transaction began. That was the
block we could use, and did use.
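To make the setup concrete, here is a minimal sketch of how such an experiment might be staged. The table name T, the assumption that it fits in one block, the 10,000-iteration loop, and the use of SQL*Plus AUTOTRACE are illustrative assumptions, not details taken from the text above:
-- Session 1: use a serializable transaction so every read is "as of"
-- the transaction's start time, and AUTOTRACE to watch consistent gets.
set autotrace traceonly statistics
alter session set isolation_level = serializable;
select * from t;             -- baseline: a handful of I/Os
-- Session 2: modify the same block over and over, committing each time.
begin
    for i in 1 .. 10000
    loop
        update t set x = x;  -- touches the block without changing the data
        commit;
    end loop;
end;
/
-- Session 1 again: the cached block is now "too new," so Oracle must
-- apply undo repeatedly to re-create the block as of the transaction's
-- start; consistent gets jump to roughly 10,000.
select * from t;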
Note: Interestingly, if you were to rerun the SELECT * FROM T, you would likely see the I/O go back down to 7 or so
again; it would not be 10,004. The reason? Oracle has the ability to store multiple versions of the same block in the buffer
cache. When you undid the changes to this block for the query that did 10,004 I/Os, you left that version in the cache, and
subsequent executions of your query are able to access it.
So, do we only encounter this problem when using the SERIALIZABLE isolation level? No, not at all. Consider a
query that runs for five minutes. During the five minutes the query is running, it is retrieving blocks from the buffer
cache. Every time it retrieves a block from the buffer cache, it will perform this check: “Is the block too new? If so, roll
it back.” And remember, the longer the query runs, the higher the chance that a block it needs has been modified
over time.
Now, the database is expecting this check to happen (i.e., to see if a block is “too new” and the subsequent rolling
back of the changes), and for just such a reason, the buffer cache may actually contain multiple versions of the same
block in memory. In that fashion, chances are that a version you require will be there, ready and waiting to go, instead of
having to be materialized using the undo information. A query such as the following may be used to view these blocks:
select file#, block#, count(*)   -- count the cached copies of each block
from v$bh                        -- one row per buffer in the buffer cache
group by file#, block#
having count(*) > 3              -- only blocks with more than three versions
order by 3
/
In general, you will find no more than about six versions of a block in the cache at any point in time, but these
versions can be used by any query that needs them.
It is generally small, hot tables that run into this issue of inflated I/Os due to read consistency. Other queries
most often affected by this issue are long-running queries against volatile tables. The longer they run, the longer they
run, because over time they may have to perform more work to retrieve a block from the buffer cache.
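As an illustration of that effect, one could deliberately slow a query down so another session has time to modify the table mid-query. This is only a sketch: the SLOW_DOWN function is hypothetical, and DBMS_SESSION.SLEEP assumes Oracle 12c Release 2 or later (older releases would use DBMS_LOCK.SLEEP):
-- Hypothetical helper: sleep briefly for each row fetched, stretching
-- the query out long enough for another session to interfere.
create or replace function slow_down( p_seconds in number ) return number
as
begin
    dbms_session.sleep( p_seconds );  -- 12cR2+; dbms_lock.sleep on older releases
    return 0;
end;
/
-- Session 1: a long-running query against T.
select t.*, slow_down( 0.1 ) from t;
-- Session 2, while that query runs: modify T and commit. Every block
-- Session 1 visits after this point is "too new" and must be rolled
-- back using undo, so the query does progressively more work.
update t set x = x;
commit;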
Write Consistency
So far, we've looked at read consistency: Oracle's ability to use undo information to provide nonblocking queries and
consistent (correct) reads. We understand that as Oracle reads blocks for queries out of the buffer cache, it will ensure
that the version of the block is “old” enough to be seen by that query.
But that raises the following questions: What about writes/modifications? What happens when you run an UPDATE
statement, as follows, and while that statement is running, someone updates a row it has yet to read from Y=5 to Y=6
and commits?
update t set x = 2 where y = 5;
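To picture the race, here is a sketch of the scenario being described. The table definition, seed rows, and timing are illustrative assumptions; in practice you would need something like the slow-down trick above to pause the UPDATE mid-statement:
-- Setup: two rows that the UPDATE will visit, both with Y = 5.
create table t ( x int, y int );
insert into t values ( 1, 5 );
insert into t values ( 2, 5 );
commit;
-- Session 1 starts the update and, conceptually, processes row by row:
update t set x = 2 where y = 5;
-- Session 2, before Session 1 reaches the second row, changes that
-- row's Y from 5 to 6 and commits:
update t set y = 6 where x = 2;
commit;
When Session 1 finally reaches that row, its consistent read sees Y = 5 while the current version of the row has Y = 6; resolving that conflict is what write consistency is about.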