Note  You can use this bit of information to further understand why using an AFTER FOR EACH ROW trigger is more efficient than using a BEFORE FOR EACH ROW trigger. The AFTER trigger won't have the same effect, because we've already retrieved the block in current mode by then.

Which leads us to the “Why do we care?” question.
Why Is a Restart Important to Us?
The first thing that pops out should be “Our trigger fired twice!” We had a one-row table with a BEFORE FOR EACH ROW
trigger on it. We updated one row, yet the trigger fired two times.
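To make this concrete, a minimal two-session test along these lines can reproduce the double firing. This is a sketch only; the table, column, and trigger names are illustrative:

    create table t ( x int );
    insert into t values ( 1 );
    commit;

    create or replace trigger t_before_upd
    before update on t for each row
    begin
        -- print the values the trigger sees on each firing
        dbms_output.put_line( 'old.x = ' || :old.x || ', new.x = ' || :new.x );
    end;
    /

    -- Session 1: update t set x = x+1;   (no commit; the row is now locked)
    -- Session 2: set serveroutput on
    --            update t set x = x+1;   (blocks on session 1's row lock)
    -- Session 1: commit;
    -- Session 2 unblocks, finds the current row differs from its consistent
    -- read, and restarts the update. Its output shows two firings:
    --     old.x = 1, new.x = 2     (the firing before the restart)
    --     old.x = 2, new.x = 3     (the firing after the restart)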
Think of the potential implications of this. If you have a trigger that does anything nontransactional, this could
be a fairly serious issue. For example, consider a trigger that sends an e-mail whose body reads “This is what the data used to look like. It has been modified to look like this now.” If you sent the e-mail directly from the trigger, using UTL_SMTP in Oracle9i or UTL_MAIL in Oracle 10g and above, the user would receive two e-mails, one of them reporting an update that never actually happened.
Anything you do in a trigger that is nontransactional will be impacted by a restart. Consider the following implications:

•  A trigger that maintains PL/SQL global variables, such as a count of the number of rows processed: when a statement that restarts rolls back, the modifications to the PL/SQL variables won't roll back (this pitfall is sketched in the example below).
•  Virtually any function that starts with UTL_ (UTL_FILE, UTL_HTTP, UTL_SMTP, and so on) should be considered susceptible to a statement restart. When the statement restarts, UTL_FILE won't un-write to the file it was writing to.
•  Any trigger that is part of an autonomous transaction must be suspect. When the statement restarts and rolls back, the autonomous transaction can't be rolled back.
All of these side effects must be handled with care, in the knowledge that a trigger may fire more than once per row, or may fire for a row that ends up not being updated by the statement at all.
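As a sketch of the first pitfall (package and trigger names here are illustrative), consider a package-level row counter maintained by a row trigger:

    create or replace package state_pkg
    as
        g_rows_processed number := 0;
    end;
    /

    create or replace trigger t_count_rows
    before update on t for each row
    begin
        -- this assignment is not transactional: a statement restart rolls
        -- back the row changes, but the increment below stays in place
        state_pkg.g_rows_processed := state_pkg.g_rows_processed + 1;
    end;
    /

After the restarted single-row update demonstrated earlier, g_rows_processed would be 2, not 1, and nothing in the session would tell you the count had been inflated.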
The second reason you should care about potential restarts is performance related. We have been using a
single-row example, but what happens if you start a large batch update and it is restarted after processing the first
100,000 records? It will roll back the 100,000 row changes, restart in SELECT FOR UPDATE mode, and do the 100,000 row
changes again after that.
You might notice, after putting in that simple audit trail trigger (the one that reads the :NEW and :OLD values), that performance is much worse than you can explain, even though nothing else has changed except the new trigger. It could be that you are now incurring restarts that never happened in the past. Or the addition of a tiny program that updates just a single row here and there makes a batch process that used to run in an hour suddenly run for many hours, due to restarts that never used to take place.
This is not a new feature of Oracle—it has been in the database since version 4.0, when read consistency was
introduced. I myself was not totally aware of how it worked until the summer of 2003 and, after I discovered what it
implied, I was able to answer a lot of “How could that have happened?” questions from my own past. It has made me
swear off using autonomous transactions in triggers almost entirely, and it has made me rethink the way some of my
applications have been implemented. For example, I'll never send e-mail from a trigger directly; rather, I'll always use
DBMS_JOB or something similar to send the e-mail after my transaction commits. This makes the sending of the e-mail
transactional; that is, if the statement that caused the trigger to fire and send the e-mail is restarted, the rollback it
performs will roll back the DBMS_JOB request. Most everything nontransactional that I did in triggers was modified to
be done in a job after the fact, making it all transactionally consistent.
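A minimal sketch of that approach follows. SEND_MAIL is a placeholder for whatever mail procedure you actually use; the point is that the DBMS_JOB.SUBMIT call itself is transactional:

    create or replace trigger t_queue_mail
    after update on t for each row
    declare
        l_job binary_integer;
    begin
        -- queue the notification rather than sending it; the job request
        -- is part of the current transaction and disappears on rollback
        dbms_job.submit
        ( job  => l_job,
          what => 'send_mail( ''x changed from ' || :old.x ||
                  ' to ' || :new.x || ''' );' );
    end;
    /

If the triggering statement restarts, the rollback removes the queued job along with the row changes, so only the final, committed update results in an e-mail.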