at the time of a system crash or process failure will be correctly rolled back.
Naturally, we assume that the locking protocol used prevents dirty writes. During
normal processing, in the absence of system crashes and process failures, it follows
immediately from Theorem 5.24 that any active, not-precommitted transaction can
do (or complete) a partial or total rollback, because then, for the purpose of the
proof, we can assume that each C action in the history means true commit rather
than precommit.
Assume then that a transaction is active or precommitted, but not committed at
the time of a system crash. Such a transaction does not have its commit log record
on the log disk. Its commit log record may or may not have been in the log buffer
at the time of the failure. In any case, the application has not been notified of
the commit of the transaction. Thus it is correct to roll back such a transaction. The
sequence of log records found on the log disk at the time of restart recovery is one
that can also be produced in the case that transactions are committed individually
rather than in groups, up to the last log record flushed in the last group commit.
Thus, the transaction is correctly rolled back in the undo pass of ARIES recovery.
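The restart-time decision described above can be sketched as follows. This is a minimal illustration, not the actual ARIES analysis pass: the log-record shapes ("begin", "update", "commit") and the function name are assumptions made for the example.

```python
# Hypothetical sketch: at restart, decide which transactions to roll back,
# given `log_disk`, the sequence of log records that reached the log disk
# before the crash. Record shapes are illustrative, not ARIES's format.

def transactions_to_roll_back(log_disk):
    """Return ids of transactions with no commit record on the log disk.

    A precommitted transaction whose commit record was still in the log
    buffer at the crash is indistinguishable here from an active one,
    so it is rolled back in the undo pass.
    """
    seen, committed = set(), set()
    for record in log_disk:
        kind, tid = record[0], record[1]
        seen.add(tid)
        if kind == "commit":
            committed.add(tid)
    return seen - committed

log = [("begin", 1), ("update", 1, "x"), ("commit", 1),
       ("begin", 2), ("update", 2, "y")]  # T2's commit never reached disk
assert transactions_to_roll_back(log) == {2}
```

Because the application was never notified of T2's commit, rolling T2 back is correct even though T2 may already have been precommitted.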
We have yet to consider what must be done in the event of a failure of a single
server-process thread that executes a transaction. Actually, nothing special needs to
be done besides what is explained in Sect. 4.10. If the failed server-process thread
was waiting for the next group commit, it does not hold any page latched; so the
failure cannot have left any page corrupted. Thus nothing whatsoever needs to be
done. The precommitted transaction either will be committed in the next group
commit or aborted and rolled back if a system crash occurs before the commit log
record goes to disk. In the former case the application is merely left without the
commit notification.
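The interplay of precommit, the group flush, and the commit notification can be sketched with a small in-memory model. This is an assumed, simplified structure (the class and method names are invented for illustration): a server-process thread precommits by appending a commit log record to the log buffer, and the application may be notified only after a group flush has taken that record to the simulated log disk.

```python
import threading

# Hypothetical sketch of group commit, assuming an in-memory log buffer
# and a simulated log disk; all names here are illustrative.

class GroupCommitLog:
    def __init__(self):
        self.buffer = []    # commit records not yet on the log disk
        self.disk = []      # records that would survive a crash
        self.lock = threading.Lock()
        self.flushed = threading.Condition(self.lock)

    def precommit(self, tid):
        """Append the commit log record to the log buffer; the
        transaction is now precommitted but not yet committed."""
        with self.lock:
            self.buffer.append(("commit", tid))

    def group_flush(self):
        """Take all buffered records to disk in one write and wake
        the server-process threads waiting for the group commit."""
        with self.lock:
            self.disk.extend(self.buffer)
            self.buffer.clear()
            self.flushed.notify_all()

    def wait_for_commit(self, tid):
        """Block until this transaction's commit record is on the log
        disk; only then may the application be notified."""
        with self.lock:
            while ("commit", tid) not in self.disk:
                self.flushed.wait()

log = GroupCommitLog()
log.precommit(1)
log.precommit(2)
log.group_flush()        # one disk write commits the whole group
log.wait_for_commit(1)   # returns at once: T1 is durably committed
assert ("commit", 2) in log.disk and log.buffer == []
```

A crash before `group_flush` would lose the buffered records, which is exactly the case where the transactions are rolled back at restart without the application ever having been notified.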
15.2 Online Page Relocation
In write-intensive transaction-processing environments where tuples are frequently
inserted, updated, or deleted, the buffer soon fills up with modified pages, so that
many pages must be flushed onto disk during normal processing and when taking
checkpoints. Performance of flushing can be improved by using large sequential
disk writes in which a sequence of pages is written to a number of consecutive disk
blocks by a single operation.
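A large sequential write of this kind can be sketched as a planning step: assign the flushed pages a run of consecutive new disk blocks and record the resulting old-to-new mapping, which is what later forces the page identifiers (and the index pointers referring to them) to change. The function and data shapes below are assumptions made for the example.

```python
# Hypothetical sketch: plan one large sequential disk write by assigning
# each flushed page a new consecutive disk block. The returned mapping
# (old page id -> new block) is what drives the subsequent adjustment of
# tuple identifiers in indexes. Names and shapes are illustrative.

def plan_sequential_flush(dirty_pages, next_free_block):
    """Return (start_block, pages, mapping): the pages are written to
    blocks start_block, start_block+1, ... in a single operation."""
    start = next_free_block
    mapping = {}
    for offset, page_id in enumerate(dirty_pages):
        mapping[page_id] = start + offset
    return start, list(dirty_pages), mapping

start, pages, mapping = plan_sequential_flush(["p7", "p3", "p9"], 100)
assert start == 100
assert mapping == {"p7": 100, "p3": 101, "p9": 102}
```

The mapping makes the overhead discussed next concrete: every index record that carries one of the old page identifiers must be updated to the new block address.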
A large disk write necessarily relocates the flushed pages to newly allocated
consecutive disk addresses, implying that the flushed pages must change their page
identifiers accordingly. Page relocation naturally occurs in a reorganization of a
database structure, with the overhead of adjusting pointers (tuple identifiers) in
indexes pointing to tuples in relocated data pages. In offline reorganization of a
sparse primary index, the secondary indexes are usually dropped and then rebuilt
on the relocated data, because the index records in a secondary index usually carry
tuple identifiers besides the primary keys. When done online during normal inserts,
updates, and deletes on an indexed relation, page relocation also introduces the