EODA@ORA12CR1> declare
2 l_rec t%rowtype;
3 begin
4 for i in 1 .. 10000
5 loop
6 select * into l_rec from t where id=i;
7 end loop;
8 end;
9 /
PL/SQL procedure successfully completed.
EODA@ORA12CR1> exec dbms_output.put_line( (get_stat_val('redo size')-:redo)
|| ' bytes of redo generated...');
0 bytes of redo generated...
PL/SQL procedure successfully completed.
I see that no redo is generated—the blocks are all clean.
If we were to rerun the preceding example with the buffer cache set to hold a little more than 100,000 blocks, we'd
find that we generate little to no redo on any of the SELECTs; we would not have to clean dirty blocks during either of our
SELECT statements. This is because the 10,000-plus blocks we modified (remember, the index was modified as well) fit
comfortably into 10 percent of our buffer cache, and we are the only users. There is no one else mucking around with
the data, and no one else is causing our data to be flushed to disk or accessing those blocks. In a live system, it would
be normal for at least some of the blocks not to be cleaned out at commit time.
This behavior will most affect you after a large INSERT (as just demonstrated), UPDATE, or DELETE—one that affects
many blocks in the database (anything more than 10 percent of the size of the cache will definitely do it). You'll notice
that the first query to touch the block after this will generate a little redo and dirty the block, possibly causing it to be
rewritten if DBWn had already flushed it or the instance had been shut down, clearing out the buffer cache altogether.
There is not too much you can do about it. It is normal and to be expected. If Oracle didn't do this deferred cleanout
of a block, a COMMIT could take as long to process as the transaction itself. The COMMIT would have to revisit each and
every block, possibly reading them in from disk again (they could have been flushed).
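A minimal sketch of how you might observe the deferred cleanout yourself, reusing the get_stat_val function from the earlier examples (the UPDATE shown is an assumption; any statement that dirties many more blocks than 10 percent of the cache will do):

```sql
EODA@ORA12CR1> update t set id = id;   -- dirty a large number of blocks
EODA@ORA12CR1> commit;                 -- cleans out only a fraction of them

EODA@ORA12CR1> variable redo number
EODA@ORA12CR1> exec :redo := get_stat_val('redo size');

EODA@ORA12CR1> select count(*) from t; -- first read after the mass UPDATE

EODA@ORA12CR1> exec dbms_output.put_line( (get_stat_val('redo size')-:redo) -
>                  || ' bytes of redo generated...');
```

If the modified blocks exceeded roughly 10 percent of the buffer cache, the final line will report a nonzero amount of redo generated by a plain SELECT—the signature of block cleanout at work.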
If you are not aware of block cleanouts and how they work, they will be one of those mysterious things that just
seem to happen for no reason. For example, say you UPDATE a lot of data and COMMIT . Now you run a query against
that data to verify the results. The query appears to generate tons of write I/O and redo. It seems impossible if you
are unaware of block cleanouts; it was to me the first time I saw it. You go and get someone to observe this behavior
with you, but it is not reproducible as the blocks are now “clean” on the second query. You simply write it off as one of
those database mysteries—a mystery that only happens when you are alone.
In an OLTP system, you'll probably never see a block cleanout happening, since those systems are characterized
by small, short transactions that affect only a few blocks. By design, all or most of the transactions are short and sweet.
Modify a couple of blocks and they all get cleaned out. In a warehouse where you make massive UPDATEs to the data
after a load, block cleanouts may be a factor in your design. Some operations will create data on “clean” blocks. For
example, CREATE TABLE AS SELECT, direct-path loaded data, and direct-path inserted (using the /*+ APPEND */ hint)
data will all create clean blocks. An UPDATE, normal INSERT, or DELETE may create blocks that need to be cleaned with
the first read. This could really affect you if your processing consists of:

- Bulk-loading lots of new data into the data warehouse
- Running UPDATEs on all of the data you just loaded (producing blocks that need to be cleaned out)
- Letting people query the data
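One way to sidestep the cleanout cost in that scenario, since direct-path inserted data creates clean blocks, is to fold the transformation into the load itself rather than running a mass UPDATE afterward. A sketch, using hypothetical table names (STAGE for the staging data, BIG_TABLE for the warehouse table):

```sql
-- Instead of a conventional load followed by a mass UPDATE (which leaves
-- blocks to be cleaned out by the first reader), apply the transformation
-- during a direct-path insert; the blocks are written clean.
insert /*+ APPEND */ into big_table ( id, object_name )
select id, upper(object_name)    -- transformation done as part of the load
  from stage;
commit;                          -- direct-path data is clean; subsequent
                                 -- SELECTs will not have to clean it out
```

The queries in step three then read clean blocks and generate no cleanout redo.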
 