A write-write conflict. Suppose we have T1 writing to o and T2 writing to o, thus making T1 and T2 conflict. If we have placed write locks in T1 for object o at j1 sites, then the number j2 of write locks in transaction T2 for o must be at least n − j1 + 1. Because the situation is reflexive (the same argument could have been made swapping T1 and T2), we can set j1 = j2 = j and thus say that the number of write locks j placed by any transaction is given by j ≥ ⌈(n + 1)/2⌉.
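This bound can be checked numerically: any two sets of ⌈(n + 1)/2⌉ sites out of n must share at least one site, so two conflicting writes always meet at a common site where the second lock request blocks. A minimal sketch (the helper names are illustrative, not from the text):

```python
import math
from itertools import combinations

def min_write_quorum(n):
    # j must satisfy j >= n - j + 1, i.e. j >= ceil((n + 1) / 2)
    return math.ceil((n + 1) / 2)

def quorums_always_overlap(n, j):
    # True if every pair of j-site subsets of n sites intersects,
    # i.e. two write quorums of size j can never be disjoint.
    sites = range(n)
    return all(set(a) & set(b)
               for a in combinations(sites, j)
               for b in combinations(sites, j))

for n in range(1, 8):
    j = min_write_quorum(n)
    assert quorums_always_overlap(n, j)              # the quorum size suffices
    if j > 1:
        assert not quorums_always_overlap(n, j - 1)  # and is minimal
```

For example, with n = 5 sites the minimum write quorum is 3, and any two 3-site subsets of 5 sites necessarily overlap, while two disjoint 2-site subsets exist.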
Often we can identify that in a particular system the number of read operations greatly outnumbers the number of write operations. Therefore, we want to minimize the cost of making a read operation, at the expense of greater cost in making a write operation. That can be achieved by using a write-locks-all scheme, where any write operation locks all sites (i.e., j = n). Therefore, from the above rules, we have n ≥ n − k + 1 → k ≥ 1, that is, only one read lock is needed, because every site will already have the conflicting write lock present.
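The trade-off can be sketched as follows (the function name is illustrative, not from the text): given the read-write conflict rule k + j ≥ n + 1, the read quorum shrinks as the write quorum grows, reaching k = 1 when j = n.

```python
def min_read_quorum(n, j):
    # Read-write conflict rule: k + j >= n + 1, so k >= n - j + 1.
    # This guarantees any read quorum meets any write quorum at some site.
    return n - j + 1

n = 5
print(min_read_quorum(n, j=n))   # write-locks-all: only 1 read lock needed
print(min_read_quorum(n, j=3))   # majority writes: reads need 3 locks
```

This is why write-locks-all favors read-heavy workloads: reads touch a single site, while writes pay the cost of locking every replica.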
9.5.1.3 Deadlocks
For fragmented data, a deadlock may occur between transactions executing at separate sites, each running its own LTM. For example, consider the following concurrent execution of transactions T1 and T2 from Figure 9.6 that (when using strict 2PL) has reached a deadlocked state:
r1(a987), w1(a987), r2(a100), r2(a123), r2(a130), r2(a400), r2(a563), r2(a888), r1(a123)
T1 is unable to proceed because its next operation w1(a123) is blocked waiting for T2 to release the lock obtained by r2(a123), and T2 is unable to proceed because its next operation r2(a987) is blocked waiting for T1 to release the lock obtained by w1(a987). In a single-server system, this results in a waits-for graph that contains a cycle, indicating the deadlock, and either T1 or T2 would be rolled back. In a DDB, if the account relation is fragmented as shown in Figure 9.3(b), then locks on a987 will be held at S2, and locks on a123 at S1, leading to a requirement that the waits-for graph is maintained on a global basis.
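As an illustrative sketch (the graph representation and names are assumptions, not from the text), the deadlock above can be detected by building the waits-for graph and searching it for a cycle:

```python
# Waits-for graph for the schedule above: T1 waits for T2 (lock on a123),
# and T2 waits for T1 (lock on a987).
waits_for = {"T1": ["T2"], "T2": ["T1"]}

def has_cycle(graph):
    # Depth-first search with a recursion stack: reaching a node that is
    # still on the stack means we followed a back edge, i.e. a cycle.
    visiting, done = set(), set()

    def visit(node):
        if node in visiting:
            return True          # back edge: cycle found
        if node in done:
            return False
        visiting.add(node)
        if any(visit(m) for m in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in graph)

print(has_cycle(waits_for))   # True: T1 -> T2 -> T1, so one must be rolled back
```

In the distributed case the difficulty is not the search itself but assembling the graph, since each edge may be known only to the LTM at one site.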
One simple way to achieve that is to use the GTM to hold the graph,
but that leads to delays in transaction execution because a remote copy of the
waits-for graph needs to be constantly updated during execution. An alterna-
tive mechanism is to store local waits-for graphs, with any transaction outside
the local server being marked as an EXT node. Once a cycle is detected at the