17 commit;
18 exit when c%notfound;
19 end loop;
20 close c;
21 end;
22 /
PL/SQL procedure successfully completed.
EODA@ORA12CR1> exec dbms_output.put_line((dbms_utility.get_cpu_time-:n)||' cpu hsecs...' );
67 cpu hsecs...
PL/SQL procedure successfully completed.
Not only that, but you should notice that the code is getting more and more complex. From the sheer simplicity
of a single UPDATE statement, to procedural code, to even more complex procedural code—we are going in the wrong
direction! Furthermore (yes, there is more to complain about), the preceding procedural code is not done yet. It
doesn't deal with “what happens when we fail” (not if we fail, but rather when we fail). What happens if this code gets
halfway done and then the system fails? How do you restart procedural code that commits along the way? You'd have to add yet more
code so you knew where to pick up and continue processing. With the single UPDATE statement, we just reissue the
UPDATE . We know that it will entirely succeed or entirely fail; there will not be partial work to worry about. We revisit
this point in the section “Restartable Processes Require Complex Logic.”
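To make the contrast concrete, here is a minimal sketch of the atomic approach (the table name T and column OBJECT_NAME are illustrative, not from the preceding example): if the statement fails or the instance crashes partway through, Oracle rolls the incomplete work back, and restarting is simply a matter of reissuing the statement.

```sql
-- Atomic approach: the entire update succeeds or fails as a unit.
-- On failure, Oracle rolls back the partial work; we just run it again.
update t
   set object_name = lower(object_name);
commit;
```

There is no bookkeeping code here because there is no intermediate state to record: either the COMMIT happened or it did not.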
Now, just to supply a counterpoint to this discussion, recall in Chapter 7 when we discussed the concept of write
consistency and how an UPDATE statement, for example, could be made to restart. If the preceding UPDATE statement
were performed against a subset of the rows (it had a WHERE clause, and other users were modifying the columns that
WHERE clause referenced), there would be a case either for using a series of smaller transactions rather than one large
transaction or for locking the table prior to performing the mass update.
The goal here would be to reduce the opportunity for restarts to occur.
If we were to UPDATE the vast majority of the rows in the table, that would lead us toward using the LOCK TABLE
command. In my experience, however, these sorts of large mass updates or mass deletes (really the only statement
types that would be subject to the restart) are done in isolation. That large, one-time bulk update or the purge of old
data generally is not done during a period of high activity. Indeed, the purge of data should not be affected by this at
all, since you would typically use some date field to locate the information to purge, and other applications would
not modify this data.
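A sketch of the LOCK TABLE approach (table and column names are hypothetical): taking an exclusive lock up front prevents other sessions from modifying rows while our statement runs, which removes the write-consistency restart scenario at the cost of concurrency during the operation.

```sql
-- Serialize the mass update: no other session can modify T until
-- we commit, so our UPDATE cannot be forced into a restart.
lock table t in exclusive mode;

update t
   set processed_flag = 'Y'
 where created < add_months(sysdate, -12);

commit;  -- the COMMIT releases the table lock
```

Since this blocks concurrent writers for the duration, it is only appropriate for the isolated, off-peak bulk operations described above.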
Snapshot Too Old Error
Let's now look at the second reason developers are tempted to commit updates in a procedural loop, which arises from
their (misguided) attempts to use a “limited resource” (undo segments) sparingly. This is a configuration issue; you
need to ensure that you have enough undo space to size your transactions correctly. Committing in a loop, apart from
generally being slower, is also the most common cause of the dreaded ORA-01555 error. Let's look at this in more detail.
As you will appreciate after reading Chapters 1 and 7, Oracle's multiversioning model uses undo segment data
to reconstruct blocks as they appeared at the beginning of your statement or transaction (depending on the isolation
mode). If the necessary undo information no longer exists, you will receive an ORA-01555: snapshot too old error
message and your query will not complete. So, if you are modifying the table that you are reading (as in the previous
example), you are generating undo information required for your query. Your UPDATE generates undo information
that your query will probably be making use of to get the read-consistent view of the data it needs to update. If you
commit, you are allowing the system to reuse the undo segment space you just filled up. If it does reuse the undo,
wiping out old undo data that your query subsequently needs, you are in big trouble. Your SELECT will fail and your
UPDATE will stop partway through. You have a partly finished logical transaction and probably no good way to restart it
(more about this in a moment).
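The failure mode just described is the classic “fetch across commit” antipattern. A minimal sketch (the table T is hypothetical): the open cursor needs the undo generated by our own updates to present a read-consistent view as of the time the cursor was opened, yet each COMMIT tells the system that very undo space may be reused.

```sql
declare
   cursor c is select rowid rid from t;  -- result set is "as of" open time
begin
   for r in c
   loop
      update t
         set object_name = lower(object_name)
       where rowid = r.rid;
      commit;  -- marks the undo we just generated as reusable; if it is
               -- overwritten, a later fetch from the still-open cursor C
               -- can fail with ORA-01555: snapshot too old
   end loop;
end;
/
```

The irony is that committing more frequently, done to “conserve” undo, is precisely what makes the undo unavailable when the query needs it.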