8 begin
9 l_lock_id :=
10 dbms_utility.get_hash_value( to_char( :new.x ), 0, 1024 );
11 if ( dbms_lock.request
12 ( id => l_lock_id,
13 lockmode => dbms_lock.x_mode,
14 timeout => 0,
15 release_on_commit => TRUE ) not in (0,4) )
16 then
17 raise resource_busy;
18 end if;
19 end;
20 /
Trigger created.
SCOTT@ORA12CR1> insert into demo(x) values (1);
1 row created.
Now, to demonstrate this trigger catching the blocking INSERT problem within a single session, we'll use an
AUTONOMOUS_TRANSACTION so that it seems as if this next block of code were executed in another SQL*Plus session.
In fact, if you use another session, the behavior will be the same. Here we go:
SCOTT@ORA12CR1> declare
2 pragma autonomous_transaction;
3 begin
4 insert into demo(x) values (1);
5 commit;
6 end;
7 /
declare
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
ORA-06512: at "SCOTT.DEMO_BIFER", line 14
ORA-04088: error during execution of trigger 'SCOTT.DEMO_BIFER'
ORA-06512: at line 4
The concept here is to take the supplied primary key value of the table protected by the trigger and put it in a
character string. We can then use DBMS_UTILITY.GET_HASH_VALUE to come up with a mostly unique hash value for the
string. As long as we use a hash table smaller than 1,073,741,823, we can lock that value exclusively using DBMS_LOCK.
After hashing, we take that value and use DBMS_LOCK to request that lock ID to be exclusively locked with a
timeout of ZERO (this returns immediately if someone else has locked that value). If we time out or fail for any reason,
we raise ORA-00054 Resource Busy. Otherwise, we do nothing; it is OK to insert, we won't block. Upon committing
our transaction, all locks, including those allocated by this DBMS_LOCK call, will be released.
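To see exactly what the trigger is doing, here is the same hash-and-request logic as a standalone anonymous block
(a sketch for experimentation only; the key value 1, the hash base 0, and the hash table size 1,024 simply mirror the
example trigger), with the DBMS_LOCK.REQUEST return codes spelled out in comments:
declare
    l_lock_id  number;
    l_status   number;
begin
    -- hash the key exactly as the trigger does
    l_lock_id := dbms_utility.get_hash_value( to_char( 1 ), 0, 1024 );

    -- request an exclusive lock on that ID, returning immediately:
    -- 0 = success, 1 = timeout, 2 = deadlock, 3 = parameter error,
    -- 4 = we already own the lock, 5 = illegal lock handle
    l_status := dbms_lock.request
                ( id                => l_lock_id,
                  lockmode          => dbms_lock.x_mode,
                  timeout           => 0,
                  release_on_commit => TRUE );

    dbms_output.put_line( 'dbms_lock.request returned ' || l_status );
end;
/
The trigger treats anything other than 0 or 4 as "someone else has this key in flight" and raises the error.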
Of course, if the primary key of your table is an INTEGER and you don't expect the key to go over 1 billion, you can
skip the hash and just use the number as the lock ID.
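For instance, a variant of the trigger that uses the key directly might look like this (a sketch only, assuming the
same DEMO table and the same resource_busy exception bound to -54 as in the original trigger):
create or replace trigger demo_bifer
before insert on demo
for each row
declare
    resource_busy   exception;
    pragma exception_init( resource_busy, -54 );
begin
    -- the integer key itself is the lock ID; no hash, so no false collisions
    if ( dbms_lock.request
         ( id                => :new.x,
           lockmode          => dbms_lock.x_mode,
           timeout           => 0,
           release_on_commit => TRUE ) not in (0,4) )
    then
        raise resource_busy;
    end if;
end;
/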
You'll need to play with the size of the hash table (1,024 in this example) to avoid artificial resource busy messages
due to different strings hashing to the same number. The size of the hash table will be application (data)-specific, and
it will be influenced by the number of concurrent insertions as well. You might also add a flag to the trigger to allow
people to turn the check on and off. If I were going to insert hundreds or thousands of records, for example, I might
not want this check enabled.
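One simple way to implement such a flag (a sketch only; the package name and variable here are hypothetical, not
part of the original example) is a package-level boolean that the trigger consults before doing any work:
create or replace package demo_lock_ctl
as
    -- hypothetical session-level switch; the check is on by default
    g_check_enabled   boolean := TRUE;
end;
/
Inside the trigger, the lock request would be wrapped in if ( demo_lock_ctl.g_check_enabled ) then ... end if;.
Because package variables are private to each session, a bulk-load session could disable the check just for itself
with exec demo_lock_ctl.g_check_enabled := FALSE before inserting, without affecting anyone else.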