The database buffer cache is where database blocks are stored temporarily. This is a structure in Oracle's SGA.
As blocks are read, they are stored in this cache in the hope that we won't have to physically reread them later. The buffer
cache is first and foremost a performance-tuning device. It exists solely to make the very slow process of physical I/O
appear to be much faster than it is. When we modify a block by updating a row on it, these modifications are done in
memory to the blocks in the buffer cache. Enough information to redo (replay) this modification is stored in the redo
log buffer, another SGA data structure. When we COMMIT our modifications, making them permanent, Oracle does not
go to all of the blocks we modified in the SGA and write them to disk. Rather, it just writes the contents of the redo log
buffer out to the online redo logs. As long as that modified block is in the buffer cache and not on disk, we need the
contents of that online redo log in case the database fails. If, at the instant after we committed, the power was turned
off, the database buffer cache would be wiped out.
If this happens, the only record of our change is in that redo log file. Upon restart of the database, Oracle will
actually replay our transaction, modifying the block again in the same way we did and committing it for us. So, as long
as that modified block is cached and not written to disk, we can't reuse (overwrite) that redo log file.
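You can observe this protection mechanism from SQL. As a quick sketch (not from the original text, but using the standard V$LOG view), the STATUS column shows which logs are still needed: a status of ACTIVE means the log is no longer current but still protects dirty blocks that DBWn has not yet written to disk, so it cannot be overwritten yet:

-- List the online redo log groups; ACTIVE means the checkpoint for that
-- log has not completed, so Oracle can't reuse it; INACTIVE logs can be.
select group#, sequence#, bytes/1024/1024 as mb, status
  from v$log
 order by group#;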
This is where DBWn comes into play. This Oracle background process is responsible for making space in the buffer
cache when it fills up and, more important, for performing checkpoints. A checkpoint is the writing of dirty (modified)
blocks from the buffer cache to disk. Oracle does this in the background for us. Many things can cause a checkpoint to
occur, the most common being a redo log switch.
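If you want to see a checkpoint happen on demand, the standard commands below will trigger one. This is a sketch for experimentation on a test instance, assuming you have ALTER SYSTEM privileges:

-- Force a log switch, which in turn initiates a checkpoint
-- for the log file group being left behind:
alter system switch logfile;

-- Or request a checkpoint directly, asking DBWn to write out
-- all dirty blocks without switching log files:
alter system checkpoint;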
As we filled up log file 1 and switched to log file 2, Oracle initiated a checkpoint. At this point, DBWn started writing
to disk all of the dirty blocks that are protected by log file group 1. Until DBWn flushes all of these blocks protected by
that log file, Oracle can't reuse (overwrite) it. If we attempt to use it before DBWn has finished its checkpoint, we'll get a
message like this in our database's ALERT log:
...
Thread 1 cannot allocate new log, sequence 66
Checkpoint not complete
Current log# 2 seq# 65 mem# 0: /home/ora12cr1/app/ora12cr1/oradata/orcl/redo01.log
...
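(If you'd like to check your own instance for this message, the directory holding the text alert log can be found as follows; this sketch assumes Oracle 11g or later, where V$DIAG_INFO is available:)

-- The alert log (alert_<SID>.log) lives in the 'Diag Trace' directory:
select value
  from v$diag_info
 where name = 'Diag Trace';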
So, when this message appeared, processing was suspended in the database while DBWn hurriedly finished its
checkpoint. Oracle gave all the processing power it could to DBWn at that point in the hope it would finish faster.
This is a message you never want to see in a nicely tuned database instance. If you do see it, you know for a
fact that you have introduced artificial, unnecessary waits for your end users. This can always be avoided. The goal
(and this is a task for the DBA, not necessarily the developer) is to have enough online redo log files allocated so that you
never attempt to reuse a log file before the checkpoint (initiated by the log switch) completes. If you see this message
frequently, it means a DBA has not allocated sufficient online redo logs for the application, or that DBWn needs to be
tuned to work more efficiently.
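One way to check for this condition, sketched here using the standard V$LOG_HISTORY view, is to look at how often log switches occur; a commonly cited rule of thumb is that switching much more often than every 15 to 20 minutes during steady-state processing suggests the logs are undersized:

-- Count log switches per hour; a high, sustained switch rate
-- suggests the online redo logs may be too small.
select to_char(first_time, 'YYYY-MM-DD HH24') as hour,
       count(*) as switches
  from v$log_history
 group by to_char(first_time, 'YYYY-MM-DD HH24')
 order by 1;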
Different applications will generate different amounts of redo log. A Decision Support System (DSS, query only)
or DW system will naturally generate significantly less online redo logging than an OLTP (transaction processing)
system would, day to day. A system that does a lot of image manipulation in Binary Large Objects (BLOBs) in the
database may generate radically more redo than a simple order-entry system. An order-entry system with 100 users
will probably generate a tenth the amount of redo 1,000 users would generate. Thus, there is no “right” size for your
redo logs, although you do want to ensure they are large enough for your unique workload.
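To get a feel for how much redo your particular workload generates, one approach (a sketch using the standard V$SYSSTAT view) is to sample the instance-wide 'redo size' statistic before and after a representative period and take the difference:

-- Cumulative bytes of redo generated since instance startup;
-- sample twice and subtract to measure a given interval.
select name, value
  from v$sysstat
 where name = 'redo size';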
You must take many things into consideration when setting both the size of and the number of online redo logs.
Many of them are beyond the scope of this discussion, but I'll list some of them to give you an idea:
Peak workloads: You'd like your system not to have to wait on checkpoint-not-complete
messages and not to get bottlenecked during your peak processing. You should size your redo logs
not for average hourly throughput, but rather for your peak processing. If you generate 24GB
of log per day, but 10GB of that log is generated between 9:00 am and 11:00 am, you'll want to
size your redo logs large enough to carry you through that two-hour peak. Sizing them for an
average of 1GB per hour would probably not be sufficient. (One way to enlarge the logs is sketched below.)
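Note that online redo log files cannot simply be resized in place. If you conclude yours are too small, the usual approach, sketched below with a hypothetical file name and size, is to add new, larger groups and then drop the old ones once they become INACTIVE:

-- Add a new, larger group (the file name and 1GB size are hypothetical):
alter database add logfile group 4
  ('/home/ora12cr1/app/ora12cr1/oradata/orcl/redo04.log') size 1g;

-- After a log switch, when group 1 shows INACTIVE in V$LOG, drop it
-- (Oracle does not delete the OS file; remove it manually):
alter database drop logfile group 1;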