RAID 4: Implements block-level striping with a dedicated parity disk. This
achieves the same fault resilience as level 3, but matches the transfer unit
to the native block size of the device, ensuring that an entire disk block is
utilized on each read. Because a block-sized request can be serviced by a
single disk rather than by every disk in the array, this organization greatly
improves transfer performance for small transactions in comparison to the
lower levels of RAID, as the sketch below illustrates.
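To make the layout concrete, here is a minimal sketch in Python, assuming an illustrative four-data-disk array and a 4 KB transfer unit (both parameters and all names are assumptions, not part of the original text). It shows how block-level striping maps a logical block to a single data disk, and how the dedicated parity disk holds the byte-wise XOR of the data blocks in each stripe:

from functools import reduce

BLOCK_SIZE = 4096   # assumed native block size, in bytes
N_DATA = 4          # assumed data-disk count; one extra disk holds parity

def raid4_map(logical_block):
    """Map a logical block to (data_disk, offset) under block-level striping."""
    disk = logical_block % N_DATA         # blocks are striped round-robin
    offset = logical_block // N_DATA      # same offset on every disk
    return disk, offset

def raid4_parity(stripe_blocks):
    """The parity block is the byte-wise XOR of the data blocks in one stripe."""
    return bytes(reduce(lambda a, b: a ^ b, cols)
                 for cols in zip(*stripe_blocks))

Note that raid4_map touches exactly one data disk, which is why a small read does not involve the rest of the array.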
RAID 5: Uses block-level striping like RAID 4, but distributes the parity
blocks across all of the disks that comprise the RAID set, eliminating the
dedicated parity disk as a write bottleneck. When a disk fails, a RAID 5
array can continue to operate in a degraded mode, reconstructing the missing
data on the fly from the surviving disks. Some RAID systems allow the failed
disk to be replaced and rebuilt while the unit is running. However, the
rebuild process is typically very costly and can greatly reduce the
performance of the array while it is in progress. A second failure while the
array is in the degraded state renders the volume unusable, thereby
motivating more robust approaches to encoding the redundancy information
used to recover from disk failures (see the sketch after this paragraph).
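The sketch below illustrates the two ideas in this paragraph under assumed parameters (a five-disk array; all names are illustrative). The parity rotation shown is the left-symmetric layout, which is one common choice, not the only one; degraded-mode reconstruction rebuilds a missing block as the XOR of the stripe's surviving blocks:

from functools import reduce

N_DISKS = 5  # assumed array width; each stripe devotes one block to parity

def parity_disk(stripe):
    """Left-symmetric rotation: the parity block moves one disk per stripe."""
    return (N_DISKS - 1 - stripe) % N_DISKS

def reconstruct(surviving_blocks):
    """Rebuild the missing block of a stripe as the XOR of the survivors,
    since data0 ^ data1 ^ ... ^ parity == 0 for an intact stripe."""
    return bytes(reduce(lambda a, b: a ^ b, cols)
                 for cols in zip(*surviving_blocks))

Reads that hit the failed disk must pull every surviving block in the stripe, which is why degraded-mode performance drops noticeably.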
RAID 6: RAID level 6 was not part of the original RAID taxonomy, but is now
commonly considered a standard RAID configuration. RAID 6 introduces an
additional parity block to handle the increased failure rates that are
anticipated with extremely large disk configurations. In contrast, RAID 5
uses the simplest case of a Reed-Solomon erasure code, which enables it to
tolerate the loss of a single disk. However, technology trends and large
data centers have increased the probability of seeing multiple disk failures
within the same RAID group. RAID 6 extends the Reed-Solomon code with a
second, independent parity block, enabling the array to continue operating
in the presence of two simultaneous disk failures.
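As an illustrative sketch of the dual-parity idea, the code below follows the common P+Q construction over GF(2^8): P is plain XOR parity, while Q weights each data block by a power of a generator, so that any two erasures yield a solvable system. This is an assumed, unoptimized rendering; production controllers use lookup tables and vectorized arithmetic for the same math:

def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the polynomial x^8+x^4+x^3+x^2+1 (0x11D)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D   # reduce the overflow bit by the field polynomial
    return p

def raid6_pq(data_blocks):
    """Compute P (XOR parity) and Q (generator-weighted parity) for a stripe."""
    p = bytearray(len(data_blocks[0]))
    q = bytearray(len(data_blocks[0]))
    for i, block in enumerate(data_blocks):
        g_i = 1                      # g^i, with generator g = 2
        for _ in range(i):
            g_i = gf_mul(g_i, 2)
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g_i, byte)
    return bytes(p), bytes(q)

With both P and Q intact, losing any two blocks of a stripe leaves two independent equations in two unknowns, which is precisely what allows a RAID 6 array to survive a second failure during a rebuild.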
RAID can be implemented in either hardware or software. However, the parity
computation and automatic volume rebuild process for RAID 2 and higher
typically benefit from dedicated hardware, which has led to a robust market
for hardware RAID controllers. Because RAID 0 and RAID 1 have
less-demanding requirements for fault detection and correction, many
operating systems incorporate logical volume managers that support
concatenation, mirroring, or striping of volumes in software.
As large deployments become more common, hierarchical implementations of
RAID technology have grown in popularity. The terminology for this
hierarchical structure is RAID M+N, or "RAID MN," where M is the baseline
building block that is composed together in RAID N fashion. So, for
instance, a RAID 05 (also known as RAID 0+5) is a RAID 5 array comprised of
a number of striped RAID 0 arrays, whereas a RAID 50 is a RAID 0 array
striped across RAID 5 elements. RAID 50 and RAID 60 are the most commonly
employed hierarchical RAID arrangements.
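To illustrate the composition, the following minimal sketch (assumed group counts and illustrative names) maps a logical block in a RAID 50: the outer RAID 0 layer selects a RAID 5 group, and the inner layer then applies its own striping and parity rotation within that group:

N_GROUPS = 3          # assumed number of RAID 5 groups striped together
DISKS_PER_GROUP = 4   # assumed disks per RAID 5 group

def raid50_map(logical_block):
    """RAID 50: stripe across groups first (the RAID 0 layer), then hand
    the block to the selected group's internal RAID 5 layout."""
    group = logical_block % N_GROUPS             # outer RAID 0 layer
    block_in_group = logical_block // N_GROUPS   # resolved by the RAID 5 layer
    return group, block_in_group

Layering this way confines each rebuild to a single RAID 5 group, which is one reason the RAID 50 and RAID 60 arrangements are favored in large deployments.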