Hardware Reference
In-Depth Information
01, except with somewhat increased reliability because more combinations of multiple drive
failures can be tolerated, and rebuilding an array after a failed drive is replaced is much faster
and more efficient.
Additional custom or proprietary RAID levels exist that were not originally supported by the RAID
Advisory Board. For example, from 1993 through 2004, “RAID 7” was a trademarked marketing term
used to describe a proprietary RAID implementation released by the (now defunct) Storage Computer
Corp.
When set up for maximum performance, arrays typically run RAID Level 0, which incorporates data
striping. Unfortunately, RAID 0 also sacrifices reliability such that if any one drive fails, all data in
the array is lost. The advantage is in extreme performance. With RAID 0, performance generally
scales up with the number of drives you add to the array. For example, with four drives you won't
necessarily have four times the performance of a single drive, but many controllers can come close to
that for sustained transfers. Some overhead is still involved in the controller performing the striping,
and issues still exist with latency—that is, how long it takes to find the data—but performance will be
higher than any single drive can normally achieve.
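To make the striping idea concrete, here is a minimal Python sketch (purely illustrative, not tied to any particular controller's firmware) showing how consecutive logical blocks are assigned round-robin to the drives in a RAID 0 array; the drive count is an assumption matching the four-drive example above:

  # RAID 0 striping sketch: logical blocks are spread round-robin across
  # the member drives, so long sequential transfers keep all drives busy.
  NUM_DRIVES = 4    # assumed four-drive array, as in the example above

  def raid0_map(logical_block):
      """Return (drive_index, block_within_drive) for a logical block."""
      drive = logical_block % NUM_DRIVES
      block_within_drive = logical_block // NUM_DRIVES
      return drive, block_within_drive

  # The first eight logical blocks land on drives 0, 1, 2, 3, 0, 1, 2, 3,
  # which is why sequential throughput scales with the number of drives.
  for lb in range(8):
      print(lb, raid0_map(lb))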
When set up for reliability, arrays generally run RAID Level 1, which is simple drive mirroring. All
data written to one drive is written to the other. If one drive fails, the system can continue to work on
the other drive. Unfortunately, this does not increase performance, and it also means you get to use
only half of the available drive capacity. In other words, you must install two drives, but you get to
use only one. (The other is the mirror.) However, in an era of high capacities and low drive prices,
this is not a significant issue.
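The mirroring idea can be sketched in a few lines of Python; this is an illustrative model only (in-memory dictionaries stand in for the two physical drives), not how a real controller is implemented:

  # RAID 1 mirroring sketch: every write goes to both members, so a read
  # can be satisfied by whichever drive is still healthy.
  class Raid1:
      def __init__(self):
          self.drives = [{}, {}]        # two in-memory stand-ins for drives
          self.failed = [False, False]

      def write(self, block, data):
          for drive in self.drives:
              drive[block] = data       # identical copy on each member

      def read(self, block):
          for i, drive in enumerate(self.drives):
              if not self.failed[i]:
                  return drive[block]   # any surviving copy will do
          raise IOError("both mirror members have failed")

  array = Raid1()
  array.write(0, b"important data")
  array.failed[0] = True                # simulate losing one drive
  print(array.read(0))                  # still readable from the mirror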
Combining performance with fault tolerance requires using one of the other RAID levels, such as
RAID 5 or 10. For example, virtually all professional RAID controllers used in network file servers
are designed to use RAID Level 5. Controllers that implement RAID Level 5 were once very
expensive. RAID 5 requires at least three drives to be connected, whereas RAID 10 requires at
least four drives.
With four 500GB drives in a RAID 5 configuration, you would have 1.5TB of total storage, and you
could withstand the failure of any single drive. After a drive failure, data could still be read from and
written to the array. However, read/write performance would be exceptionally slow, and it would
remain so until the drive was replaced and the array was rebuilt. The rebuild process could take a
relatively long time, so if another drive failed before the rebuild completed, all data would be lost.
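The capacity figure and the rebuild idea both follow from RAID 5's XOR parity. The short Python sketch below (an illustration of the parity math, not any controller's actual firmware) shows the usable capacity of N - 1 drives and how a missing block is reconstructed by XOR-ing the surviving blocks with the parity block:

  from functools import reduce

  # Usable RAID 5 capacity is the sum of the drives minus one drive's worth,
  # which for four 500GB drives gives the 1.5TB figure quoted above.
  drives_gb = [500, 500, 500, 500]
  usable_gb = sum(drives_gb) - max(drives_gb)   # 1500GB

  def xor_blocks(blocks):
      """XOR equal-length byte blocks together."""
      return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

  # One stripe: three data blocks plus their parity block.
  data = [b"\x01\x02", b"\x10\x20", b"\xa0\xb0"]
  parity = xor_blocks(data)

  # If the drive holding data[1] fails, its contents can be rebuilt from
  # the remaining data blocks and the parity block.
  rebuilt = xor_blocks([data[0], data[2], parity])
  assert rebuilt == data[1]
  print(usable_gb, "GB usable; rebuilt block:", rebuilt)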
With four drives in a RAID 10 configuration, you would have only 1TB of total storage. However,
you could withstand many cases of multiple drive failures. In addition, after a drive failure, data
could still be read from and written to the array at full speed, with no noticeable loss in performance.
Once the failed drive is replaced, the rebuild process also goes relatively quickly compared to
rebuilding a RAID 5 array. Because of these advantages, many storage professionals now recommend
RAID 10 as an alternative to RAID 5 where maximum redundancy and performance are required.
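The failure cases described above can be checked with a small Python sketch; the pairing of drives into mirrored sets is an assumption for illustration, since real controllers may number and pair drives differently:

  # RAID 10 sketch: four drives form two mirrored pairs that are striped.
  # The array survives as long as every pair keeps at least one healthy drive.
  PAIRS = [(0, 1), (2, 3)]          # assumed pairing for this example

  def survives(failed_drives):
      return all(any(d not in failed_drives for d in pair) for pair in PAIRS)

  print(survives({0}))              # True:  any single failure is tolerated
  print(survives({0, 2}))           # True:  two failures in different pairs
  print(survives({0, 1}))           # False: losing both halves of one pair

  # Usable capacity is half the raw space: four 500GB drives give 1TB.
  print(4 * 500 // 2, "GB usable")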
Many motherboards include SATA RAID capability as a built-in feature. For those that don't, or
where a higher performance or more capable SATA RAID solution is desired, you can install a
SATA RAID host adapter in a PCIe slot in the system. A typical PCIe SATA RAID controller
enables up to four, six, or eight drives to be attached, and you can run them in RAID Level 0, 1, 5, or
10 mode. Most PCIe SATA RAID cards use a separate SATA data channel (cable) for each drive,
 