This locking method works in all environments, but it does increase the probability that a user performing an
update will lose. That is, when that user goes to update her row, she finds that the data has been modified, and she has
to start over.
One popular implementation of optimistic locking is to keep the old and new values in the application, and upon
updating the data, use an update like this:
Update table
Set column1 = :new_column1, column2 = :new_column2, ....
Where primary_key = :primary_key
And decode( column1, :old_column1, 1 ) = 1
And decode( column2, :old_column2, 1 ) = 1
...
Here, we are optimistic that the data doesn't get changed. In this case, if our update updates one row, we got
lucky; the data didn't change between the time we read it and the time we got around to submitting the update. If
we update zero rows, we lose; someone else changed the data and now we must figure out what we want to do to
continue in the application. Should we make the end user re-key the transaction after querying the new values for
the row (potentially causing the user frustration, as there is a chance the row will have changed yet again)? Should we
try to merge the values of the two updates by performing update conflict-resolution based on business rules
(lots of code)?
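The compare-the-before-image pattern above can be sketched in a host language. The following is a minimal, hypothetical example using Python's sqlite3 module and an invented DEPT table; SQLite's NULL-safe IS operator plays the role of Oracle's DECODE(column, :old_value, 1) = 1 trick, and the statement's row count tells us whether we won or lost the race:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")
conn.execute("INSERT INTO dept VALUES (10, 'ACCOUNTING')")

# Read the row and remember its before image.
(old_dname,) = conn.execute(
    "SELECT dname FROM dept WHERE deptno = ?", (10,)).fetchone()

# Update only if the before image still matches.  IS is NULL-safe,
# standing in for DECODE(column1, :old_column1, 1) = 1 in the text.
first = conn.execute(
    "UPDATE dept SET dname = ? WHERE deptno = ? AND dname IS ?",
    ("FINANCE", 10, old_dname)).rowcount   # 1 row: we got lucky

# Retrying with the now-stale before image matches nothing: a lost race,
# and the application must re-read the row and decide how to continue.
second = conn.execute(
    "UPDATE dept SET dname = ? WHERE deptno = ? AND dname IS ?",
    ("SALES", 10, old_dname)).rowcount     # 0 rows: someone got there first
```

The zero-rows outcome is exactly the "we lose" case described above: nothing is silently overwritten, and the application is forced to confront the conflict.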
The preceding UPDATE will, in fact, avoid a lost update, but it does stand a chance of being blocked, hanging while
it waits for an UPDATE of that row by another session to complete. If all of your applications use optimistic locking,
then using a straight UPDATE is generally OK since rows are locked for a very short duration as updates are applied and
committed. However, if some of your applications use pessimistic locking, which will hold locks on rows for relatively
long periods of time, or if there is any application (such as a batch process) that might lock rows for a long period of
time (more than a second or two is considered long), then you should consider using a SELECT FOR UPDATE NOWAIT
instead to verify the row was not changed, and lock it immediately prior to the UPDATE to avoid getting blocked by
another session.
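As a sketch of that verify-and-lock step, in Oracle-style SQL and reusing the hypothetical bind variables from the earlier UPDATE, the statement might look like this (NOWAIT makes the session receive an ORA-00054 error immediately instead of blocking if another session holds the row lock):

```sql
-- Verify the row is unchanged and lock it, failing fast if it is
-- already locked; the UPDATE that follows then cannot block.
select column1, column2, ....
  from table
 where primary_key = :primary_key
   and decode( column1, :old_column1, 1 ) = 1
   and decode( column2, :old_column2, 1 ) = 1
   for update nowait;
```

If the query returns zero rows, the data changed; if it raises the resource-busy error, another session holds the lock and the application can report that rather than hang.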
There are many methods of implementing optimistic concurrency control. We've discussed one whereby the
application will store all of the before images of the row in the application itself. In the following sections, we'll explore
two others, namely:
- Using a special column that is maintained by a database trigger or application code to tell us the "version" of the record.
- Using a checksum or hash that was computed using the original data.
Optimistic Locking Using a Version Column
This is a simple implementation that involves adding a single column to each database table you wish to protect from
lost updates. This column is generally either a NUMBER or DATE/TIMESTAMP column. It is typically maintained via a
row trigger on the table, which is responsible for incrementing the NUMBER column or updating the DATE/TIMESTAMP
column every time a row is modified.
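As a concrete sketch of the version-column approach, here is a hypothetical example using Python's sqlite3 module and an invented DEPT table with a NUMBER-style version column. Note that the UPDATE statement itself increments the version (no trigger involved), and only the version, not every before image, must be remembered:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT, "
    "version INTEGER NOT NULL DEFAULT 0)")
conn.execute("INSERT INTO dept (deptno, dname) VALUES (10, 'ACCOUNTING')")

# Read the row; the version is the only before image we need to keep.
deptno, dname, version = conn.execute(
    "SELECT deptno, dname, version FROM dept WHERE deptno = ?",
    (10,)).fetchone()

# The UPDATE bumps the version itself and succeeds only if the version
# is still the one we read.
updated = conn.execute(
    "UPDATE dept SET dname = ?, version = version + 1 "
    "WHERE deptno = ? AND version = ?",
    ("FINANCE", deptno, version)).rowcount   # 1 row: update applied

# A second update carrying the stale version finds no row to modify.
stale = conn.execute(
    "UPDATE dept SET dname = ?, version = version + 1 "
    "WHERE deptno = ? AND version = ?",
    ("SALES", deptno, version)).rowcount     # 0 rows: lost update avoided
```

Folding the increment into the UPDATE, rather than a trigger, keeps the versioning visible in the statement that needs it.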
Note: I said it was typically maintained via a row trigger. I did not, however, say that was the best way or the right way to maintain it. I would personally prefer that this column be maintained by the UPDATE statement itself, not via a trigger, because triggers that are not absolutely necessary (as this one is) should be avoided. For background on why I avoid triggers, refer to my "The Trouble with Triggers" article from Oracle Magazine, found on the Oracle Technology Network at http://www.oracle.com/technetwork/issue-archive/2008/08-sep/o58asktom-101055.html.