I get the error. I should point out that I added an index hint to the query and a WHERE clause to make sure I was
reading the table randomly (together, they caused the cost-based optimizer to read the table “sorted” by the index key).
When we process a table via an index, we tend to read a block for a single row, and then the next row we want will be
on a different block. Ultimately, we will process all of the rows on block 1, just not all at the same time. Block 1 might
hold, say, the data for all rows with OBJECT_NAMEs starting with the letters A, M, N, Q, and Z. So we would hit the block
many times, since we are reading the data sorted by OBJECT_NAME and presumably many OBJECT_NAMEs start with
letters between A and M. Since we are committing frequently and reusing undo space, we eventually revisit a block
where we can simply no longer roll back to the point our query began, and at that point we get the error.
This was a very artificial example just to show how it happens in a reliable manner. My UPDATE statement was
generating undo. I had a very small undo tablespace to play with (10MB). I wrapped around in my undo segments
many times, since they are used in a circular fashion. Every time I committed, I allowed Oracle to overwrite the undo
data I generated. Eventually, I needed some piece of data I had generated, but it no longer existed and I received the
ORA-01555 error.
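The listing that produced this behavior is not reproduced here, but a minimal sketch of the kind of block being described might look like the following. The table T (a copy of ALL_OBJECTS), the index T_IDX on OBJECT_NAME, and the per-row commit are assumptions made for illustration; the line numbers quoted in the error messages below refer to the author's original listing, not to this sketch.

begin
    for x in ( select /*+ index( t t_idx ) */ rowid rid, object_name
                 from t
                where object_name > ' ' )   -- hint plus predicate force index-ordered reads
    loop
        update t
           set object_name = lower(x.object_name)
         where rowid = x.rid;                -- each update generates undo
        commit;                              -- frequent commits let that undo be overwritten
    end loop;
end;
/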
You would be right to point out that in this case, if I had not committed on line 10, I would have received the
following error:
begin
*
ERROR at line 1:
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDO_SMALL'
ORA-06512: at line 6
The major differences between the two errors are as follows:

•	The ORA-01555 example left my update in a totally unknown state. Some of the work had been done; some had not.

•	There is absolutely nothing I can do to avoid the ORA-01555 error, given that I committed in the cursor FOR loop.

•	The ORA-30036 error can be avoided by allocating appropriate resources in the system. This error is avoidable by correct sizing (see the sketch just after this list); the first error is not. Further, even if I don't avoid this error, at least the update is rolled back and the database is left in a known, consistent state, not halfway through some large update.
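To make the sizing point concrete, "allocating appropriate resources" can be as simple as giving the scratch undo tablespace room to grow. The datafile name and sizes below are assumptions for illustration, not values from the original example.

alter database datafile '/u01/oradata/ORCL/undo_small01.dbf' resize 2g;

-- or let the file extend on demand, up to a cap
alter database datafile '/u01/oradata/ORCL/undo_small01.dbf'
  autoextend on next 100m maxsize 4g;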
The bottom line here is that you can't “save” on undo space by committing frequently—you need that undo. I was in
a single-user system when I received the ORA-01555 error. It takes only one session to cause that error, and many times
even in real life it is a single session causing its own ORA-01555 errors. Developers and DBAs need to work together to size
these segments adequately for the jobs that need to be done. There can be no short-changing here. You must discover,
through analysis of your system, what your biggest transactions are and size appropriately for them. The dynamic
performance view V$UNDOSTAT can be very useful to monitor the amount of undo you are generating and the duration of
your longest running queries. Many people consider things like temp, undo, and redo as overhead—things to allocate as
little storage to as possible. This is reminiscent of a problem the computer industry had on January 1, 2000, which was
all caused by trying to save 2 bytes in a date field. These components of the database are not overhead, but rather are key
components of the system. They must be sized appropriately (not too big and not too small).
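As a starting point for that analysis, a query along these lines (a sketch, not taken from the original text) shows what V$UNDOSTAT records for each ten-minute interval: how much undo was generated, how long the longest query ran, and how many "snapshot too old" and out-of-space errors occurred.

select begin_time,
       end_time,
       undoblks,          -- undo blocks generated in the interval
       txncount,          -- transactions executed in the interval
       maxquerylen,       -- longest running query (in seconds) observed
       ssolderrcnt,       -- ORA-01555 (snapshot too old) errors
       nospaceerrcnt      -- ORA-30036 out-of-undo-space errors
  from v$undostat
 order by begin_time;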
Note: Speaking of undo segments being too small, make sure to set your undo tablespace back to your regular one after running these examples; otherwise, you'll be hitting ORA-30036 errors for the rest of the topic!
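For example (UNDOTBS1 is the common default name; substitute whatever your regular undo tablespace is called, and drop the small one only if you created it just for these examples):

alter system set undo_tablespace = UNDOTBS1;
drop tablespace undo_small including contents and datafiles;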
 