Figure 6.5  A RAID 6 4+2 configuration offers protection against double drive failures. (The figure shows the data value 1011 being written bit by bit to four data drives, along with writes to the two parity drives.)
While this is a reasonably detailed discussion of RAID levels, what you should take from it is
that you shouldn't worry about it too much. Just don't use RAID 0 unless you have a proper use
case for it. Use hot spare drives and follow the vendor best practices on hot spare density. EMC,
for example, generally recommends one hot spare for every 30 drives in its arrays, whereas
Compellent recommends one hot spare per drive type and per drive shelf. Just be sure to check
with your storage vendor for their specific recommendations.
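As a rough illustration of how these hot spare ratios play out, the following sketch (Python, using an entirely hypothetical drive layout) compares the spare counts produced by an EMC-style one-spare-per-30-drives rule and a Compellent-style one-spare-per-drive-type-per-shelf rule. It's a back-of-the-envelope calculation, not a sizing tool; always defer to your vendor's guidance.

    import math

    # Hypothetical array layout (illustrative only; not from any real configuration).
    drive_shelves = [
        {"shelf": 1, "drives": {"15K SAS": 24}},
        {"shelf": 2, "drives": {"15K SAS": 12, "7.2K NL-SAS": 12}},
        {"shelf": 3, "drives": {"7.2K NL-SAS": 24}},
    ]

    total_drives = sum(sum(shelf["drives"].values()) for shelf in drive_shelves)

    # EMC-style guidance: roughly one hot spare per 30 drives.
    emc_style_spares = math.ceil(total_drives / 30)

    # Compellent-style guidance: one hot spare per drive type, per shelf.
    compellent_style_spares = sum(len(shelf["drives"]) for shelf in drive_shelves)

    print(f"Total drives: {total_drives}")
    print(f"One spare per 30 drives:       {emc_style_spares} hot spares")
    print(f"One spare per type, per shelf: {compellent_style_spares} hot spares")

With this 72-drive layout, the per-30-drives rule calls for 3 spares while the per-type-per-shelf rule calls for 4, which is why different vendors' arrays can end up with noticeably different spare counts.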
For most vSphere implementations, RAID 5 is a good balance of capacity efficiency, performance, and availability. Use RAID 6 if you have to use large SATA RAID groups or don't have proactive hot spares. RAID 10 schemes still make sense when you need significant write performance. Remember that for your vSphere environment it doesn't all have to be one RAID type; in fact, mixing different RAID types can be very useful to deliver different tiers of performance/availability.
For example, you can use most datastores with RAID 5 as the default LUN configuration,
sparingly use RAID 10 schemes where needed, and use storage-based policy management,
which we'll discuss later in this chapter, to ensure that the VMs are located on the storage that
suits their requirements.
You should definitely make sure that you have enough spindles in the RAID group to meet
the aggregate workload of the LUNs you create in that RAID group. The RAID type will affect
the ability of the RAID group to support the workload, so keep RAID overhead (like the RAID 5
write penalty) in mind. Fortunately, some storage arrays can nondisruptively add spindles to a
RAID group to add performance as needed, so if you find that you need more performance, you
can correct it. Storage vMotion can also help you manually balance workloads.
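To see why the RAID type affects how many spindles a workload needs, it helps to put numbers on the write penalty: each front-end write commonly generates about 2 back-end I/Os on RAID 10, 4 on RAID 5, and 6 on RAID 6. The sketch below (Python, with a made-up workload) converts a front-end IOPS requirement into the back-end IOPS the RAID group's spindles must actually deliver.

    # Commonly cited write penalties: back-end I/Os generated per front-end write.
    WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

    def backend_iops(frontend_iops, read_fraction, raid_type):
        """Estimate the back-end IOPS a RAID group must sustain for this workload."""
        reads = frontend_iops * read_fraction
        writes = frontend_iops * (1 - read_fraction)
        return reads + writes * WRITE_PENALTY[raid_type]

    # Hypothetical workload: 2,000 front-end IOPS at 70 percent reads.
    for raid_type in WRITE_PENALTY:
        print(f"{raid_type}: ~{backend_iops(2000, 0.70, raid_type):,.0f} back-end IOPS")

Dividing the back-end figure by the IOPS a single spindle of your drive type can sustain gives a rough minimum spindle count for the RAID group; the same 2,000 IOPS workload needs noticeably more spindles on RAID 6 than on RAID 10.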
Now let's take a closer look at some specific storage array design architectures that will
impact your vSphere storage environment.
Understanding VSAN
vSphere 5.5 introduces a brand-new storage feature, virtual SAN, or simply VSAN. At a high
level, VSAN pools the locally attached storage from members of a VSAN-enabled cluster and
presents the aggregated pool back to all hosts within the cluster. This could be considered an
“array” of sorts because just like a normal SAN, it has multiple disks presented to multiple hosts,
but we would take it one step further and consider it an “internal array.” While VMware has
announced VSAN as a new feature in vSphere 5.5, there are a few caveats. During the first few
months of its availability it will be considered “beta only” and therefore not for production use.
Also note that VSAN is licensed separately from vSphere itself.
As we mentioned earlier, in the section “Comparing Local Storage with Shared Storage,”
VSAN does not require any additional software installations. It is built directly into ESXi.
Managed from vCenter Server, VSAN is compatible with all the other cluster features that
vSphere offers, such as vMotion, HA, and DRS. You can even use Storage DRS to migrate VMs
on or off a VSAN datastore.
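To make the "internal array" idea a little more concrete, here is a minimal conceptual sketch (Python, with hypothetical hosts and disk sizes) of what pooling means: every host in the VSAN-enabled cluster contributes its locally attached disks, and the aggregate is presented back to all hosts as a single pool. This models only the concept, not VSAN's actual on-disk layout or object placement.

    # Hypothetical per-host local disk capacities, in GB (illustrative only).
    cluster_hosts = {
        "esxi-01": [800, 800, 800],
        "esxi-02": [800, 800, 800],
        "esxi-03": [800, 800],
    }

    # VSAN-style pooling: each host's local disks feed one shared pool
    # that is visible to every host in the cluster.
    pool_gb = sum(sum(disks) for disks in cluster_hosts.values())

    for host, disks in cluster_hosts.items():
        print(f"{host}: contributes {sum(disks)} GB from {len(disks)} local disks")

    print(f"Aggregated pool (raw): {pool_gb} GB, presented to all {len(cluster_hosts)} hosts")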