For large applications (Exchange, SQL Server, SharePoint, Oracle, MySQL, and so on), the storage sizing, layout, and best practices for large database workloads are not dissimilar to physical deployments; these workloads can be good candidates for RDMs or for VMFS volumes with no other virtual disks. Also, leverage the joint reference architectures available from VMware and the storage vendors.
Remember that the datastore will need enough IOPS and capacity for the total of all the VMs it hosts. Figure 80 to 180 IOPS per spindle, depending on spindle type (refer to the Disks item in the list of elements that make up a shared storage array in the section "Defining Common Storage Array Architectures" earlier in this chapter). If you add up the aggregate IOPS needed by all the VMs that will be in a datastore, you have a good approximation of the total. Additional I/Os are generated by the zeroing activity that occurs for thin and flat virtual disks (but not thick, which is pre-zeroed up front), but this tends to be negligible. You lose some IOPS to the RAID protection, but you know you're in the ballpark if the number of spindles supporting the datastore (via a file system and NFS, or a LUN and VMFS) times the number of IOPS per spindle is more than the total number of IOPS needed by the aggregate workload. Keep your storage vendor honest and you'll have a much more successful virtualization project!
Cache benefits are difficult to predict; they vary a great deal. If you can't do a test, assume they will have a large effect in terms of improving VM boot times with RDBMS environments on VMware but almost no effect otherwise, so plan your spindle count cautiously.
When thinking about capacity, consider not only the VM disks in the datastores but also their snapshots, their swap, and their suspended state and memory. A good rule of thumb is to assume 25 percent more than the virtual disks alone require. If you use thin provisioning at the array level, oversizing the datastore has no downside, because only what is necessary is actually used.
There is no exact best-practice datastore-sizing model. Historically, people have recommended one fixed size or another. A simple model is to select a standard guideline for the number of VMs you feel comfortable with in a datastore, multiply that number by the average size of the virtual disks of each VM, add the overall 25 percent extra space, and use that as a standardized building block. Remember, VMFS and NFS datastores don't have an effective limit on the number of VMs; with VMFS you need to consider disk queuing and, to a much lesser extent, SCSI reservations, while with NFS you need to consider the bandwidth to a single datastore.
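The building-block model just described reduces to simple arithmetic. A minimal sketch, in which the VM count and average disk size are example inputs you would choose yourself; the 25 percent overhead is the rule of thumb from the text:

```python
def datastore_size_gb(vms_per_datastore, avg_vdisk_gb, overhead=0.25):
    """Standardized datastore building block.

    Multiplies the comfortable VM count by the average virtual-disk size,
    then adds 25% headroom for snapshots, swap, and suspended state
    (the rule of thumb stated in the text).
    """
    return vms_per_datastore * avg_vdisk_gb * (1 + overhead)

# Example: comfortable with 15 VMs per datastore, 40 GB average disk.
print(datastore_size_gb(15, 40))  # 750.0 GB building block
```

Once you settle on a block size this way, you can repeat it as a standard unit rather than sizing each datastore individually.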
Be flexible and efficient. Use thin provisioning at the array level if possible, and if your array doesn't support it, use it at the VMware layer. It never hurts (so long as you monitor), but don't count on it to reduce the number of spindles you need (because of performance requirements).
If your array doesn't support thin provisioning but does support extending LUNs, use thin provisioning at the vSphere layer, but start with smaller VMFS volumes to avoid oversizing and being inefficient.
In general, don't oversize. Every modern array can add capacity dynamically, and you can
use Storage vMotion to redistribute workloads. Use the new managed datastore function to