Make checkpointing happen more frequently and more continuously. Use a smaller block buffer
cache (not entirely desirable) or various parameter settings such as FAST_START_MTTR_TARGET,
LOG_CHECKPOINT_INTERVAL, and LOG_CHECKPOINT_TIMEOUT. This will force DBWn to flush dirty
blocks more frequently. The benefit to this approach is that recovery time from a failure is
reduced. There will always be less work in the online redo logs to be applied. The downside
is that blocks may be written to disk more frequently if they are modified often. The buffer
cache will not be as effective as it could be, and it can defeat the block cleanout mechanism
discussed in the next section.
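The mechanics of adjusting these settings are straightforward; the following is only a sketch, and the values shown are illustrative rather than recommendations for any particular system:

-- aim for roughly 60 seconds of crash recovery time (illustrative value)
alter system set fast_start_mttr_target = 60;

-- or use the older parameters: checkpoint after this many redo log file blocks
-- have been filled, or after this many seconds, whichever comes first
alter system set log_checkpoint_interval = 10000;
alter system set log_checkpoint_timeout  = 300;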
The approach you take will depend on your circumstances. This is something that must be fixed at the database
level, taking the entire instance into consideration.
Block Cleanout
In this section, we'll discuss block cleanouts, or the removal of “locking”-related information on the database blocks
we've modified. This concept is important to understand when we talk about the infamous ORA-01555: snapshot
too old error in a subsequent section.
If you recall from Chapter 6, we talked about data locks and how they are managed. I described how they are
actually attributes of the data, stored on the block header. A side effect of this is that the next time that block is
accessed, we may have to clean it out—in other words, remove the transaction information. This action generates
redo and causes the block to become dirty if it wasn't already, meaning that a simple SELECT may generate redo and
may cause lots of blocks to be written to disk with the next checkpoint. Under most normal circumstances, however,
this will not happen. If you have mostly small- to medium-sized transactions (OLTP), or you have a data warehouse
that performs direct-path loads or uses DBMS_STATS to analyze tables after load operations, you'll find the blocks are
generally cleaned for you. If you recall from the earlier section titled “What Does a COMMIT Do?”, one of the steps of
COMMIT-time processing is to revisit some blocks if they are still in the SGA and if they are accessible (no one else is
modifying them), and then clean them out. This activity is known as a commit cleanout, and it is the activity that cleans
out the transaction information on our modified blocks. Optimally, our COMMIT can clean out the blocks so that a
subsequent SELECT (read) will not have to clean them out. Only an UPDATE of such a block would then truly clean out the residual
transaction information, and since the UPDATE is already generating redo, that cleanout is not noticeable.
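You can see this activity reflected in the session statistics. The query below is just a sketch; the exact statistic names vary slightly by release, so check V$STATNAME on your version:

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name like '%cleanout%'
 order by a.name;

-- typical names include 'commit cleanouts' and
-- 'commit cleanouts successfully completed'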
We can force a cleanout to not happen, and therefore observe its side effects, by understanding how the commit
cleanout works. In a commit list associated with our transaction, Oracle will record lists of blocks we have modified.
Each of these lists is 20 blocks long, and Oracle will allocate as many of these lists as it needs—up to a point. If the total number
of blocks we modify exceeds 10 percent of the block buffer cache size, Oracle will stop allocating new lists. For
example, if our buffer cache is set to cache 3,000 blocks, Oracle will maintain a list of up to 300 blocks (10 percent of
3,000). Upon COMMIT, Oracle will process each of these lists of 20 block pointers, and if the block is still available, it will
perform a fast cleanout. So, as long as the number of blocks we modify does not exceed 10 percent of the number of
blocks in the cache and our blocks are still in the cache and available to us, Oracle will clean them out upon COMMIT.
Otherwise, it just skips them (i.e., does not clean them out).
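As a rough sketch (this is just arithmetic against the initialization parameters, not an official interface), you could estimate that 10 percent threshold yourself when the buffer cache is manually sized:

-- db_cache_size is reported in bytes; dividing by the block size gives blocks.
-- Note: this returns 0 if db_cache_size is not explicitly set (e.g., when
-- automatic memory management is sizing the cache for you).
select p1.value / p2.value               as cache_blocks,
       trunc(0.10 * p1.value / p2.value) as approx_commit_cleanout_limit
  from v$parameter p1, v$parameter p2
 where p1.name = 'db_cache_size'
   and p2.name = 'db_block_size';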
With this understanding, we can set up artificial conditions to see how the cleanout works. I set my DB_CACHE_SIZE
to a low value of 16MB, which is sufficient to hold 2,048 8KB blocks (my blocksize is 8KB). Next I create a table such
that a row fits on exactly one block—I'll never have two rows per block. Then I fill this table up with 10,000 rows and
COMMIT. We know that 10,000 blocks far exceeds 10 percent of 2,048, so the database will not be able to clean out all of these dirty
blocks upon commit—most of them will not even be in the buffer cache anymore. I'll measure the amount of redo I've
generated so far, run a SELECT that will visit each block, and then measure the amount of redo that SELECT generated.
Note: In order for this example to be reproducible and predictable, you'll need to disable SGA automatic
memory management. If that is enabled, there is a chance that the database will increase the size of your buffer
cache—defeating the “math” I've worked out.
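A sketch of that setup follows. The table name, its column widths, and the use of three CHAR(2000) columns to force one row per 8KB block are my own illustrative choices, not a prescribed layout:

-- with automatic memory management disabled (see the note above), fix the cache size
alter system set db_cache_size = 16m scope=spfile;
-- restart the instance so the new cache size takes effect

-- three CHAR(2000) columns make each row roughly 6,000 bytes, so only one row
-- fits in an 8KB block
create table t ( id number, x char(2000), y char(2000), z char(2000) );

insert into t
select rownum, 'x', 'y', 'z'
  from dual
connect by level <= 10000;
commit;

-- redo generated so far by this session
select b.value as redo_size
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

-- visit every block; blocks that were not cleaned out at COMMIT time are
-- cleaned out now, and that cleanout itself generates redo
select count(*) from t;

-- measure again; the difference is the redo generated by the SELECT
select b.value as redo_size
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';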