This behavior will most affect you after a large INSERT (as just demonstrated), UPDATE, or DELETE—one that affects
many blocks in the database (anything more than 10 percent of the size of the cache will definitely do it). You'll notice
that the first query to touch the block after this will generate a little redo and dirty the block, possibly causing it to be
rewritten if DBWn had already flushed it or the instance had been shut down, clearing out the buffer cache altogether.
There is not too much you can do about it. It is normal and to be expected. If Oracle didn't do this deferred cleanout
of a block, a COMMIT could take as long to process as the transaction itself. The COMMIT would have to revisit each and
every block, possibly reading them in from disk again (they could have been flushed).
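In compressed form, the sort of demonstration referred to above might look like the following sketch (BIG_TABLE and its OWNER column are assumed here purely for illustration, and you need access to V$MYSTAT and V$STATNAME):

update big_table set owner = lower(owner);
commit;

-- note the session's cumulative redo generated so far
select b.value redo_size
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

-- the first query to read the modified blocks cleans out whatever the
-- COMMIT left behind, dirtying those blocks and generating redo
select count(*) from big_table;

-- redo size will have gone up; repeating the count(*) against the now
-- clean blocks generates little or no additional redo
select b.value redo_size
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';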
If you are not aware of block cleanouts and how they work, they will be one of those mysterious things that just
seem to happen for no reason. For example, say you UPDATE a lot of data and COMMIT . Now you run a query against
that data to verify the results. The query appears to generate tons of write I/O and redo. It seems impossible if you
are unaware of block cleanouts; it was to me the first time I saw it. You go and get someone to observe this behavior
with you, but it is not reproducible as the blocks are now “clean” on the second query. You simply write it off as one of
those database mysteries—a mystery that only happens when you are alone.
In an OLTP system, you'll probably never see a block cleanout happening, since those systems are characterized
by small, short transactions that affect only a few blocks. By design, all or most of the transactions are short and sweet.
Modify a couple of blocks and they all get cleaned out. In a warehouse where you make massive UPDATEs to the data
after a load, block cleanouts may be a factor in your design. Some operations will create data on “clean” blocks. For
example, CREATE TABLE AS SELECT, direct-path loaded data, and direct-path inserted (using the /*+ APPEND */ hint)
data will all create clean blocks. An UPDATE, normal INSERT, or DELETE may create blocks that need to be cleaned with
the first read. This could really affect you if your processing consists of the following:
•	Bulk-loading lots of new data into the data warehouse
•	Running UPDATEs on all of the data you just loaded (producing blocks that need to be cleaned out)
•	Letting people query the data
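To make the clean-versus-not-clean distinction above concrete, the following sketch (the table and column names are made up) contrasts statements that produce already-clean blocks with ones that can leave blocks for the first reader to clean out:

-- these create data on blocks that arrive "clean"
create table sales_2024 as select * from sales;             -- CREATE TABLE AS SELECT
insert /*+ APPEND */ into sales_hist select * from sales;   -- direct-path insert
commit;

-- these, by contrast, may leave blocks that the first subsequent
-- query has to clean out
insert into sales_hist select * from sales;                 -- conventional-path insert
update sales_hist set load_status = 'DONE';
commit;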
You have to realize that the first query to touch the data will incur some additional processing if the block needs
to be cleaned. Realizing this, you yourself should “touch” the data after the UPDATE. You just loaded or modified a ton
of data—you need to analyze it at the very least. Perhaps you need to run some reports to validate the load. This will
clean the block out and make it so the next query doesn't have to. Better yet, since you just bulk-loaded the data, you
now need to refresh the statistics anyway. Running the DBMS_STATS utility to gather statistics may well clean out all of
the blocks as it just uses SQL to query the information and would naturally clean the blocks out as it goes along.
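A sketch of that sequence, again with made-up object names: do the direct-path load, run the follow-up UPDATE, and then gather statistics (or run your validation reports) on your own schedule, so the full scan that pays the cleanout cost is yours and not an end user's:

insert /*+ APPEND */ into fact_sales select * from stage_sales;    -- direct-path load: clean blocks
commit;

update fact_sales set load_date = sysdate where load_date is null; -- leaves blocks to be cleaned out
commit;

-- gathering statistics full scans the table with plain SQL, which may
-- well clean the blocks out as it goes (and you needed the statistics anyway)
begin
    dbms_stats.gather_table_stats( ownname => user, tabname => 'FACT_SALES' );
end;
/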
Log Contention
This, like the “cannot allocate new log” message, is something the DBA must fix, typically in conjunction with the
system administrator. However, it is something a developer might detect as well if the DBA isn't watching closely enough.
If you are faced with log contention, what you might observe is a large wait time on the “log file sync” event
and long write times evidenced in the “log file parallel write” event in a Statspack report. If you see this, you may be
experiencing contention on the redo logs; they are not being written fast enough. This can happen for many reasons.
One application reason (one the DBA can't fix, but the developer must) is that you are committing too frequently—
committing inside of a loop doing INSERTs, for example. As demonstrated in the “What Does a COMMIT Do?” section,
committing too frequently, aside from being a bad programming practice, is a surefire way to introduce lots of log file
sync waits. Assuming all of your transactions are correctly sized (you are not committing more frequently than your
business rules dictate), the most common causes for log file waits that I've seen are as follows:
•	Putting redo on a slow device: The disks are just performing poorly. It is time to buy faster disks.
•	Putting redo on the same device as other files that are accessed frequently: Redo is designed to be written with sequential writes and to be on dedicated devices. If other components of your system—even other Oracle components—are attempting to read and write to this device at the same time as LGWR, you will experience some degree of contention. Here, you want to ensure LGWR has exclusive access to these devices if at all possible.
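If you want a quick look at the two wait events mentioned at the start of this section without pulling a full Statspack report, a minimal sketch against V$SYSTEM_EVENT (cumulative since instance startup; the exact columns available vary a bit by release) is:

select event, total_waits, time_waited_micro / 1000000 seconds_waited
  from v$system_event
 where event in ( 'log file sync', 'log file parallel write' )
 order by event;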
 