The added column is a virtual column and as such incurs no storage overhead. The value is not computed and
stored on disk. Rather, it is computed upon retrieval of the data from the database.
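As a sketch of what such a virtual column might look like, assuming a hypothetical DEPT table with DNAME and LOC columns (the column and table names are illustrative, not from the example above):

```sql
-- Add a virtual hash column: the expression is evaluated when the
-- row is queried; no value is ever written to disk for this column.
ALTER TABLE dept ADD
  (row_hash AS (ORA_HASH(dname || '/' || loc)));
```

Because the column is virtual, existing rows need no migration and no extra storage is consumed; only queries that actually select the column pay the computation cost.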
This example showed how to implement optimistic locking with a hash or checksum. Bear in mind that computing a hash or checksum is computationally expensive; on a system where CPU is a scarce resource, you must take this cost into account. However, this approach is much more network-friendly: transmitting a relatively small hash, instead of a before-and-after image of the row (to compare column by column), consumes far less network bandwidth.
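The network trade-off can be seen by comparing the two UPDATE styles side by side. This is a sketch against a hypothetical EMP table (column names are illustrative): the first statement ships every before-image value over the network; the second ships a single number.

```sql
-- Column-by-column compare: the before image of every column
-- travels over the network as bind values.
UPDATE emp
   SET sal = :new_sal
 WHERE empno = :empno
   AND ename = :old_ename
   AND sal   = :old_sal
   AND comm  = :old_comm;

-- Hash compare: a single small hash travels instead; the database
-- recomputes the hash from the current row and compares.
UPDATE emp
   SET sal = :new_sal
 WHERE empno = :empno
   AND ORA_HASH(ename || '/' || sal || '/' || comm) = :row_hash;
```

In both cases, an update count of zero tells the application that some other session modified the row after it was read, and the lost update is avoided.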
Optimistic or Pessimistic Locking?
So which method is best? In my experience, pessimistic locking works very well in Oracle (but perhaps not so well in
other databases) and has many advantages over optimistic locking. However, it requires a stateful connection to the
database, like a client/server connection. This is because locks are not held across connections. This single fact makes
pessimistic locking unrealistic in many cases today. In the past, with client/server applications and a couple dozen or
hundred users, it would have been my first and only choice. Today, however, optimistic concurrency control is what I
would recommend for most applications. Having a connection for the entire duration of a transaction is just too high
a price to pay.
Of the methods available, which do I use? I tend to use the version column approach with a timestamp column. It gives me the extra update information in a long-term sense. Furthermore, it's less computationally expensive than a hash or checksum, and it doesn't run into the issues potentially encountered with a hash or checksum when processing LONG, LONG RAW, CLOB, BLOB, and other very large columns (LONG and LONG RAW are obsolete; I mention them here only because they're still used frequently in the Oracle data dictionary).
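The version column approach might be sketched as follows, assuming a hypothetical LAST_MOD timestamp column that the application (or a trigger) maintains on every update:

```sql
-- Read: remember the timestamp along with the data.
SELECT empno, ename, sal, last_mod
  FROM emp
 WHERE empno = :empno;

-- Write: succeed only if the timestamp is unchanged, and advance it.
UPDATE emp
   SET sal      = :new_sal,
       last_mod = SYSTIMESTAMP
 WHERE empno    = :empno
   AND last_mod = :last_mod_as_read;
```

If the UPDATE reports zero rows modified, the row was changed (or deleted) by another session since it was read, and the application can refetch and retry.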
If I had to add optimistic concurrency controls to a table that was still being used with a pessimistic locking
scheme (e.g., the table was accessed in both client/server applications and over the Web), I would opt for the
ORA_HASH approach. The reason is that the existing legacy application might not appreciate a new column appearing.
Even if we took the additional step of hiding the extra column, the application might suffer from the overhead of
the necessary trigger. The ORA_HASH technique would be nonintrusive and lightweight in that respect. The hashing/
checksum approach can be very database independent, especially if we compute the hashes or checksums outside
of the database. However, by performing the computations in the middle tier rather than the database, we will incur
higher resource usage penalties in terms of CPU usage and network transfers.
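A sketch of the nonintrusive ORA_HASH technique, again against a hypothetical EMP table: no column is added and no trigger is needed, so the legacy application never sees a schema change.

```sql
-- Read: compute the hash over the columns of interest at query time.
SELECT empno, ename, sal,
       ORA_HASH(ename || '/' || sal) AS row_hash
  FROM emp
 WHERE empno = :empno;

-- Write: the update applies only if the recomputed hash still matches
-- the one captured at read time.
UPDATE emp
   SET sal = :new_sal
 WHERE empno = :empno
   AND ORA_HASH(ename || '/' || sal) = :row_hash;
```

The pessimistic client/server users and the web users can coexist: the former still lock rows with SELECT FOR UPDATE, while the latter detect intervening changes via the hash.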
Blocking
Blocking occurs when one session holds a lock on a resource that another session is requesting. As a result, the
requesting session will be blocked—it will hang until the holding session gives up the locked resource. In almost every
case, blocking is avoidable. In fact, if you do find that your session is blocked in an interactive application, then you
have probably been suffering from the lost update bug as well, perhaps without realizing it. That is, your application
logic is flawed and that is the cause of the blocking.
The five common DML statements that will block in the database are INSERT , UPDATE , DELETE , MERGE , and SELECT
FOR UPDATE . The solution to a blocked SELECT FOR UPDATE is trivial: simply add the NOWAIT clause and it will no
longer block. Instead, your application will report a message back to the end user that the row is already locked. The
interesting cases are the remaining four DML statements. We'll look at each of them and see why they should not
block and how to correct the situation if they do.
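The NOWAIT technique might look like the following PL/SQL sketch (the table and values are illustrative). SELECT FOR UPDATE NOWAIT raises ORA-00054 ("resource busy") immediately instead of blocking, and the application turns that into a message for the end user:

```sql
DECLARE
   resource_busy EXCEPTION;
   PRAGMA EXCEPTION_INIT(resource_busy, -54);  -- ORA-00054
   l_ename emp.ename%TYPE;
BEGIN
   SELECT ename
     INTO l_ename
     FROM emp
    WHERE empno = :empno
      FOR UPDATE NOWAIT;       -- fail fast instead of hanging
   -- We hold the row lock here; safe to proceed with the update.
EXCEPTION
   WHEN resource_busy THEN
      DBMS_OUTPUT.PUT_LINE('Row is locked by another session; try again.');
END;
```

An alternative is FOR UPDATE WAIT n, which waits up to n seconds before raising the same class of error; either way, the session never hangs indefinitely.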
 