There are only two exceptions to the "always thin provision at the array level if you can" guideline.
The first is in the most extreme performance use cases, because thin-provisioning architectures
generally have a performance impact (usually marginal, and it varies from array to array) com-
pared to a traditional thick-storage configuration. The second is large, high-performance RDBMS
storage objects when the amount of array cache is significantly smaller than the database; ergo,
the actual backend spindles are tightly coupled to the host I/O. These database structures have
internal logic that generally expects I/O locality, which is a fancy way of saying that they lay out
data expecting the on-disk structure to reflect their internal structure. With very large array
caches, the host and the backend spindles with RDBMS-type workloads can be decoupled, and this
consideration is irrelevant. These two cases are important but rare. "Always thin provision at the
array level if you can" is a good general guiding principle.
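To make the guideline concrete, here is a minimal Python sketch that encodes the rule and its two exceptions as a decision function. The function name, parameters, and the cache-to-database ratio used as a stand-in for "significantly smaller" are illustrative assumptions, not figures from any vendor documentation.

```python
def choose_array_provisioning(extreme_performance: bool,
                              large_rdbms: bool,
                              array_cache_gb: float,
                              database_gb: float,
                              cache_ratio_threshold: float = 0.1) -> str:
    """Apply 'always thin provision at the array level if you can'
    with its two exceptions, as described above.

    cache_ratio_threshold is a hypothetical stand-in for the array
    cache being 'significantly smaller' than the database.
    """
    # Exception 1: the most extreme performance use cases, where the
    # (usually marginal) thin-provisioning overhead still matters.
    if extreme_performance:
        return "thick"

    # Exception 2: large, high-performance RDBMS storage objects where
    # the array cache is much smaller than the database, so the backend
    # spindles stay tightly coupled to host I/O and I/O locality matters.
    if large_rdbms and array_cache_gb < cache_ratio_threshold * database_gb:
        return "thick"

    # Otherwise, thin provision at the array level.
    return "thin"
```

For example, a 4 TB database behind an array with only 64 GB of cache would fall under the second exception and return "thick", while a typical general-purpose VM would return "thin".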
In the last section of this chapter, we'll pull together everything you've learned in the previ-
ous sections and summarize with some recommended practices.
Leveraging SAN and NAS Best Practices
After all the discussion of configuring and managing storage in vSphere environments, these
are the core principles:
Pick a storage architecture for your immediate and midterm scaling goals. Don't design
for extreme growth scenarios. You can always use Storage vMotion to migrate up to larger
arrays.
Consider using VMFS and NFS together; the combination provides a great deal of
flexibility.
When sizing your initial array design for your entire vSphere environment, think about
availability, performance (IOPS, MBps, latency), and then capacity—always together and
generally in that order.
The last point in the previous list cannot be overstated. People who are new to storage tend
to think primarily in terms of storage capacity (TB) and neglect availability and per-
formance. Capacity is generally not the limit for a proper storage configuration. With modern
large-capacity disks (300 GB+ per disk is common) and capacity-reduction techniques such as
thin provisioning, deduplication, and compression, you can fit a lot on a very small number of
disks. Therefore, capacity is not always the driver of an efficient design.
To make this clear, an example scenario will help. First, let's work through the capacity-
centered planning dynamic:
You determine you will have 150 VMs that are each 50 GB in size.
This means that at a minimum, if you don't apply any special techniques, you will need
7.5 TB (150 × 50 GB). Because of extra space for vSphere snapshots and VM swap, you
assume 25 percent overhead, so you plan 10 TB of storage for your vSphere environment.
With 10 TB, you could fit that on approximately 13 large 1 TB SATA drives (assuming a 10+2
RAID 6 group and one hot spare); the sketch below works through the same arithmetic.
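The capacity-centered math can be written out directly. The following Python sketch simply reproduces the numbers from this example; the function name and parameter defaults are illustrative and taken from the figures above.

```python
import math

def capacity_plan(vm_count: int = 150,
                  vm_size_gb: int = 50,
                  overhead: float = 0.25,
                  drive_tb: float = 1.0,
                  raid6_data_drives: int = 10,
                  raid6_parity_drives: int = 2,
                  hot_spares: int = 1) -> dict:
    """Capacity-centered sizing for the example above."""
    raw_tb = vm_count * vm_size_gb / 1000                # 150 x 50 GB = 7.5 TB
    planned_tb = raw_tb * (1 + overhead)                 # +25% for snapshots/swap = 9.375 TB,
                                                         # which the example rounds up to 10 TB
    usable_per_group_tb = raid6_data_drives * drive_tb   # 10 TB usable per 10+2 RAID 6 group
    groups = math.ceil(planned_tb / usable_per_group_tb)
    drives = groups * (raid6_data_drives + raid6_parity_drives) + hot_spares
    return {"raw_tb": raw_tb, "planned_tb": planned_tb, "total_drives": drives}

print(capacity_plan())
# {'raw_tb': 7.5, 'planned_tb': 9.375, 'total_drives': 13}
```

The result matches the example: one 10+2 RAID 6 group plus a hot spare, or roughly 13 drives.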
 