Figure 18-19. Example of local wait-for graphs (one per site, Site A and Site B, over transactions T1 through T4; figure not reproduced).
Figure 18-20. Example of global wait-for graph (the union of the Site A and Site B local graphs; figure not reproduced).
Study the two local wait-for graphs carefully. Observe that transactions T2 and T3 appear in both wait-for graphs; these two concurrent transactions wait for database objects at both sites. Note that neither graph by itself shows a cycle, and therefore each indicates that no deadlock has occurred. But what happens when you combine the two graphs and look at their union? Figure 18-20 presents the global wait-for graph, the combination of the two local graphs.
The global wait-for graph readily discloses a cycle and, therefore, a deadlock condition. But the DDBMS can detect this deadlock only when the local wait-for graphs are combined. How should the wait-for graphs be organized and maintained so that every deadlock is detected without fail? Let us explore two major schemes for organizing wait-for graphs in a distributed database environment.
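The union-and-check idea above can be sketched in code. This is a minimal illustration, not any particular DDBMS's implementation; the edge sets for the two sites are assumed for the example (the figures are not reproduced here), chosen so that, as in the text, neither local graph has a cycle but their union does.

```python
def has_cycle(graph):
    """Detect a cycle in a directed wait-for graph via depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:
                return True               # back edge: a cycle exists
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in list(graph))

def union(*graphs):
    """Form the global wait-for graph as the union of local wait-for graphs."""
    combined = {}
    for g in graphs:
        for txn, waits_for in g.items():
            combined.setdefault(txn, set()).update(waits_for)
    return combined

# Illustrative local graphs (assumed edges): each edge means "waits for".
site_a = {"T1": {"T2"}, "T2": {"T3"}}     # Site A: T1 -> T2 -> T3
site_b = {"T3": {"T4"}, "T4": {"T2"}}     # Site B: T3 -> T4 -> T2

print(has_cycle(site_a), has_cycle(site_b))   # no cycle in either local graph
global_graph = union(site_a, site_b)
print(has_cycle(global_graph))                # the union contains T2 -> T3 -> T4 -> T2
```

Each local graph is acyclic, so neither site alone would declare a deadlock; only the union exposes the cycle, which is exactly why the graphs must be combined somewhere.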
Centralized Scheme At specified short intervals, each site transmits its local wait-for graph to a centralized site, nominated as the deadlock coordinator. A global wait-for graph is generated at the coordinator site by taking the union of all local wait-for graphs. If the global wait-for graph discloses any cycle, the DDBMS detects a deadlock and takes action. When the DDBMS detects a deadlock and decides on a victim transaction to be aborted, the coordinator informs all concerned sites of the victim so that every site may abort that transaction and roll back any of its database changes.
Figure 18-20 is an example of a global wait-for graph generated at a deadlock coordinator site.
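The coordinator's role can be sketched as follows. The class name `DeadlockCoordinator`, the method names, and the victim-selection policy (abort the lowest-numbered transaction in the cycle) are all illustrative assumptions, not part of any specific DDBMS; real systems use costs such as work done or locks held to pick the victim.

```python
class DeadlockCoordinator:
    """Illustrative deadlock coordinator (assumed design, not a real DDBMS API)."""

    def __init__(self):
        self.local_graphs = {}               # site name -> local wait-for graph

    def receive(self, site, graph):
        """A site transmits its local wait-for graph at the agreed interval."""
        self.local_graphs[site] = graph

    def global_graph(self):
        """Union of all local wait-for graphs."""
        combined = {}
        for g in self.local_graphs.values():
            for txn, waits_for in g.items():
                combined.setdefault(txn, set()).update(waits_for)
        return combined

    def find_cycle(self):
        """Return the transactions on one cycle in the global graph, or None."""
        graph = self.global_graph()
        visited, path = set(), []

        def dfs(node):
            path.append(node)
            for succ in graph.get(node, ()):
                if succ in path:             # back edge: cycle found
                    return path[path.index(succ):]
                if succ not in visited:
                    cycle = dfs(succ)
                    if cycle:
                        return cycle
            path.pop()
            visited.add(node)
            return None

        for n in list(graph):
            if n not in visited:
                cycle = dfs(n)
                if cycle:
                    return cycle
        return None

    def resolve(self):
        """Detect a deadlock, choose a victim, and notify all sites."""
        cycle = self.find_cycle()
        if cycle is None:
            return None
        victim = min(cycle)                  # assumed policy: lowest transaction id
        for site in self.local_graphs:       # tell every site to abort and roll back
            print(f"site {site}: abort {victim} and roll back its changes")
        return victim

coord = DeadlockCoordinator()
coord.receive("A", {"T1": {"T2"}, "T2": {"T3"}})   # assumed local graph for Site A
coord.receive("B", {"T3": {"T4"}, "T4": {"T2"}})   # assumed local graph for Site B
print(coord.resolve())                             # victim chosen from the cycle
```

Note that the coordinator notifies every participating site of the victim, since each site must independently abort the transaction and undo its local changes.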
Occasionally, an additional transaction may be aborted unnecessarily after a deadlock has been detected and a victim chosen. For example, in Figure 18-20, suppose that the DBMS at site A has decided to abort T2 for some unrelated reason