2.3 Varying Strategies
In this example, the catch-up operation is invoked only when the size of the
pre-applied log exceeds a threshold. We can modify the strategy simply by
changing the rules: for example, we can use load-skew information to trigger the
catch-up operation, or have the backup and log disks defer it until their
workload is low.
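The triggering policies above can be sketched as follows. This is a minimal illustration, not the system's actual implementation; the class and constant names (`LogDisk`, `LOG_SIZE_THRESHOLD`, `LOAD_THRESHOLD`) are hypothetical.

```python
LOG_SIZE_THRESHOLD = 100   # pending entries before catch-up is forced
LOAD_THRESHOLD = 0.5       # normalized load below which catch-up may run early

class LogDisk:
    """Hypothetical log disk holding a pre-applied log."""
    def __init__(self):
        self.pending = []   # pre-applied log entries not yet caught up
        self.load = 0.0     # current normalized workload of this disk

    def put_log(self, entry):
        self.pending.append(entry)
        self.maybe_catch_up()

    def maybe_catch_up(self):
        # Size-based rule: catch up once the pre-applied log grows too large.
        if len(self.pending) > LOG_SIZE_THRESHOLD:
            self.catch_up()
        # Load-based variant: catch up opportunistically when the disk is idle.
        elif self.load < LOAD_THRESHOLD and self.pending:
            self.catch_up()

    def catch_up(self):
        # Apply all pending entries (e.g. to a backup disk), then clear the log.
        applied, self.pending = self.pending, []
        return applied
```

Switching strategies here amounts to editing `maybe_catch_up`, which mirrors how the system varies strategies by rewriting rule definitions.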
We can also vary the strategies for fault tolerance simply by changing the rule
definitions. For example, the log can be duplicated to improve the reliability
of the system; Rule_4 can be changed to handle double logs as follows:
Rule_4':
when insert(HostID, StreamID, Stream);
if (D=traverse_directory(StreamID)).disk == Own
and D.type == double_log;
then L1=mapping(Own, log1),
L2=mapping(Own, log2),
lock(StreamID),
send(L1.disk, put_log(D, insert(HostID, StreamID, Stream))),
send(L2.disk, put_log(D, insert(HostID, StreamID, Stream))),
insert_local(D.location, StreamID, Stream),
unlock(StreamID),
send(HostID, true);
else send(D.disk, insert(HostID, StreamID, Stream)).
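The double-log branch of Rule_4' can be read as the sketch below: the update log is written to both log disks before the insert is applied locally, all under a per-stream lock. The `Disk` and `Node` classes and the lock table are illustrative assumptions; only the overall sequence (lock, two log writes, local insert, unlock, acknowledge) comes from the rule.

```python
import threading

class Disk:
    """Minimal stand-in for a log disk."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def put_log(self, entry):
        self.log.append(entry)

class Node:
    def __init__(self, mapping):
        self.mapping = mapping   # e.g. {"log1": Disk(...), "log2": Disk(...)}
        self.locks = {}          # per-stream lock table (hypothetical)
        self.local = {}          # local stream storage

    def _lock(self, stream_id):
        return self.locks.setdefault(stream_id, threading.Lock())

    def insert_double_log(self, host_id, stream_id, stream):
        # Rule_4' then-branch: duplicate the update log on both log
        # disks, apply the insert locally, and acknowledge the host.
        l1, l2 = self.mapping["log1"], self.mapping["log2"]
        with self._lock(stream_id):        # lock(StreamID)
            entry = ("insert", host_id, stream_id, stream)
            l1.put_log(entry)              # send(L1.disk, put_log(...))
            l2.put_log(entry)              # send(L2.disk, put_log(...))
            self.local[stream_id] = stream # insert_local(...)
        return True                        # send(HostID, true)
```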
In this example, the update log is sent to, and written on, the two disks
indicated by the mapping information. We can just as easily change the number of
backup disks:
Rule_5':
when put_log(D, CMD);
if count(log, D) > Threshold
and D.type == double_backup;
then insert_local(log(D, CMD)),
B1=mapping(D, backup1),
catch_up(D, B1),
B2=mapping(D, backup2),
catch_up(D, B2);
else insert_local(log(D, CMD)).
Increased replication is effective for concurrent retrieval operations. The
replication of a stream is transparent to all hosts except the administrator
host. The replication information is stored as a property of the stream in the
distributed directory.
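One way to picture this transparency is a directory lookup that picks any replica for a retrieval, so hosts never see the replica set. This is a hypothetical sketch; the directory layout and the `resolve` helper are assumptions, not the system's actual directory protocol.

```python
import random

# Hypothetical distributed-directory entry: replication is recorded
# as a property of the stream, invisible to ordinary hosts.
directory = {
    "stream42": {"type": "double_log", "replicas": ["disk-a", "disk-b"]},
}

def resolve(stream_id):
    """Return some replica holding the stream; concurrent retrievals
    spread across replicas, which is why replication helps read load."""
    entry = directory[stream_id]
    return random.choice(entry["replicas"])
```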
Fragmentation of a stream is likewise transparent to the hosts. The size of the
stream, or its access patterns, serve as the criteria for fragmentation. The
decomposition of a stream is also handled by rules. These strategies