Interestingly, if you were to rerun the SELECT * FROM T, you would likely see the I/O go back down to 7 or so
again; it would not be 10,004. The reason? Oracle has the ability to store multiple versions of the same block in the buffer
cache. When you undid the changes to this block for the query that did 10,004 I/Os, you left that version in the cache, and
subsequent executions of your query are able to access it.
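You can confirm this from SQL*Plus with AUTOTRACE (a sketch, assuming the same small table T discussed above):
set autotrace traceonly statistics
select * from t;
set autotrace off
On that re-execution, the consistent gets statistic reported should be back near 7, not 10,004, since the block version
that was already rolled back for you is still sitting in the cache.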
Note
So, do we only encounter this problem when using the SERIALIZABLE isolation level? No, not at all. Consider a
query that runs for five minutes. During the five minutes the query is running, it is retrieving blocks from the buffer
cache. Every time it retrieves a block from the buffer cache, it will perform this check: “Is the block too new? If so,
roll it back.” And remember, the longer the query runs, the higher the chance that a block it needs has been modified
over time.
Now, the database is expecting this check to happen (i.e., to see if a block is “too new” and the subsequent rolling
back of the changes), and for just such a reason, the buffer cache may actually contain multiple versions of the same
block in memory. In that fashion, chances are that a version you require will be there, ready and waiting to go, instead of
having to be materialized using the undo information. A query such as the following may be used to view these blocks:
select file#, block#, count(*)
from v$bh
group by file#, block#
having count(*) > 3
order by 3
/
In general, you will find no more than about six versions of a block in the cache at any point in time, but these
versions can be used by any query that needs them.
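To see which segments those cached versions belong to, one approach (a sketch, not part of the original discussion) is
to join V$BH to DBA_OBJECTS on the data object ID:
select o.object_name, bh.file#, bh.block#, count(*)
from v$bh bh, dba_objects o
where bh.objd = o.data_object_id
group by o.object_name, bh.file#, bh.block#
having count(*) > 3
order by 4
/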
It is generally these small, hot tables that run into the issue of inflated I/Os due to read consistency. Other queries
most often affected by this issue are long-running queries against volatile tables. The longer they run, the longer they
run, because over time they may have to perform more work to retrieve a block from the buffer cache.
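One way to quantify that extra work for your own session (a sketch, not from the original text) is to snapshot these
statistics before and after the long-running query:
select n.name, s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and n.name in ( 'consistent gets',
                'data blocks consistent reads - undo records applied' );
A large value for the second statistic relative to the first suggests the query is spending much of its I/O rebuilding
read-consistent block images from undo.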
Write Consistency
So far, we've looked at read consistency: Oracle's ability to use undo information to provide nonblocking queries and
consistent (correct) reads. We understand that as Oracle reads blocks for queries out of the buffer cache, it will ensure
that the version of the block is "old" enough to be seen by that query.
But that begs the following questions: What about writes/modifications? What happens when you run an UPDATE
statement, as follows, and while that statement is running, someone updates a row it has yet to read from Y=5 to Y=6
and commits?
update t set x = 2 where y = 5;
That is, when your UPDATE began, some row had the value Y=5. As your UPDATE reads the table using consistent
reads, it sees that the row was Y=5 when the UPDATE began. But the current value for Y is now 6 (it's not 5 anymore),
and before updating the value of X, Oracle will check to see that Y is still 5. Now what happens? How is the update
affected by this?
Obviously, we can't modify an old version of a block; when we go to modify a row, we must modify the current
version of that block. Additionally, Oracle can't simply skip this row, as that would give us an inconsistent,
unpredictable result. What we'll discover is that in such cases, Oracle will restart the write modification from scratch.
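One way to actually observe such a restart (a sketch of an experiment you could run; the table and trigger names are
illustrative) is with a BEFORE UPDATE row trigger. DBMS_OUTPUT is not transactional, so output from a trigger firing
that is later rolled back still appears:
create table t ( x int, y int );
insert into t values ( 1, 5 );
commit;
create or replace trigger t_restart_demo
before update on t
for each row
begin
    -- fires once per row per attempt; a restarted UPDATE
    -- produces two lines of output for the same row
    dbms_output.put_line( 'fired: old.x=' || :old.x || ', new.x=' || :new.x );
end;
/
With serveroutput on, run update t set x = x+1; in one session and leave it uncommitted, run the same statement in a
second session so it blocks, and then commit the first session. The second session prints two lines for its single row:
one firing against the consistent-read version of the row (old.x=1) and, after the restart, one against the current
version (old.x=2). The trigger's reference to X is what makes the concurrent change to X detectable here.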