Four 1 GbE ports for iSCSI or NFS. NFS will require careful multi-datastore planning
to hit the throughput goal because of how it works in link aggregation configurations.
iSCSI will require careful multipathing configuration to hit the throughput goal.
If using block devices, you'll need to distribute VMs across datastores and design the
datastores and backing LUNs themselves to ensure that they can support the IOPS of the
VMs they contain so the queues don't overflow.
It's immediately apparent that the SATA drives are not ideal in this case (they would
require 87 spindles!). Using 300 GB 15K RPM drives (without using enterprise flash drives),
at a minimum you will have 11.7 TB of raw capacity, assuming 10 percent RAID 6 capacity
loss (10.6 TB usable). This is more than enough to store the thickly provisioned VMs, not to
mention their thinly provisioned and then deduplicated variations.
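The spindle math above can be sketched in a few lines. The per-drive IOPS figures (roughly 80 for 7.2K SATA, roughly 180 for a 15K drive) and the aggregate target of about 7,000 IOPS are illustrative assumptions, not numbers taken from the text, but they land close to the counts cited:

```python
import math

# Illustrative assumptions: rough random-I/O capability per spindle
# and an aggregate workload target of about 7,000 IOPS.
IOPS_PER_SATA = 80     # 7.2K SATA, assumed
IOPS_PER_15K = 180     # 15K RPM, assumed
TARGET_IOPS = 7000     # assumed aggregate workload

sata_spindles = math.ceil(TARGET_IOPS / IOPS_PER_SATA)   # 88, in line with the ~87 cited
fast_spindles = math.ceil(TARGET_IOPS / IOPS_PER_15K)    # 39 drives

raw_tb = fast_spindles * 300 / 1000        # 39 x 300 GB = 11.7 TB raw
usable_tb = raw_tb * (1 - 0.10)            # ~10.5 TB after 10% RAID 6 loss
```

The point of the sketch is that sizing for IOPS first can leave you with far more capacity than the workload strictly needs, which is exactly the situation the next paragraph discusses.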
Will thin provisioning and deduplication techniques save capacity? Yes. Could you use
that saved capacity? Maybe, but probably not. Remember, we've sized the configuration
to meet the IOPS workload; the saved capacity is usable only if the workload is lighter
than we measured, or if the additional workloads you would like to place on those spindles
generate no I/O during the periods the existing VMs need it. Otherwise, the spindles will
all be busy servicing the existing VMs, and additional workloads will increase the I/O
service time.
What's the moral of the story? That thin provisioning and data deduplication have no
usefulness? That performance is all that matters?
No. The moral of the story is that to be efficient you need to think about efficiency in
multiple dimensions: performance, capacity, power, operational simplicity, and flexibility.
Here is a simple five-step sequence you can use to guide the process:
1. Look at your workload, and examine the IOPS, MBps, and latency requirements.
2. Put the outliers to one side, and plan for the average.
3. Use reference architectures and a focused plan to design a virtualized configuration for
the outlier heavy workloads.
4. Plan first on the most efficient way to meet the aggregate performance workloads.
5. Then, by using the performance configuration developed in step 4, back into the most
efficient capacity configuration to hit that mark. Some workloads are performance bound
(ergo, step 4 is the constraint), and some are capacity bound (ergo, step 5 is the constraint).
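Steps 4 and 5 amount to sizing for performance first, then checking whether capacity forces a larger configuration. A minimal sketch, where the function name and all drive and workload figures are hypothetical placeholders:

```python
import math

def spindles_needed(workload_iops, required_tb, iops_per_spindle, tb_per_spindle):
    """Return (spindle count, binding constraint).

    Size for the performance workload first (step 4), then check whether
    the capacity requirement forces an even larger configuration (step 5).
    """
    perf_bound = math.ceil(workload_iops / iops_per_spindle)
    cap_bound = math.ceil(required_tb / tb_per_spindle)
    if perf_bound >= cap_bound:
        return perf_bound, "performance bound"
    return cap_bound, "capacity bound"

# Hypothetical example: 7,000 IOPS and 8 TB usable on 15K 300 GB drives
# (~180 IOPS each, ~0.27 TB usable per drive after RAID overhead).
count, constraint = spindles_needed(7000, 8, 180, 0.27)
```

In this example the performance requirement binds (39 spindles satisfy both), which mirrors the IOPS-bound configuration discussed above; a backup or archive workload would typically come out capacity bound instead.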
Let's quantify all this learning into applicable best practices:
When thinking about performance
Do a little engineering by simple planning or estimation. Measure sample hosts, or use
VMware Capacity Planner to profile the IOPS and bandwidth workload of each host that
will be virtualized onto the infrastructure. If you can't measure, at least estimate. For
virtual desktops, estimate between 5 and 20 IOPS. For light servers, estimate 50 to 100
IOPS. Usually, most configurations are IOPS bound, not throughput bound, but if you can,
measure the average I/O size of the hosts (or again, use Capacity Planner). Although
estimation can work for light server use cases, for heavy servers, don't estimate; measure
them. It's so easy to measure that it's absolutely a "measure twice, cut once" case,
particularly for VMs you know will have a heavy workload.
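The rule-of-thumb ranges above can be turned into a quick low/high aggregate estimate. The host counts in the example are hypothetical; only the per-instance IOPS ranges come from the text:

```python
# Rule-of-thumb (low, high) IOPS per instance, from the guidance above.
ESTIMATES = {
    "virtual_desktop": (5, 20),
    "light_server": (50, 100),
}

def estimate_iops(inventory):
    """Sum low/high IOPS estimates for an inventory of {workload_type: count}."""
    low = sum(ESTIMATES[w][0] * n for w, n in inventory.items())
    high = sum(ESTIMATES[w][1] * n for w, n in inventory.items())
    return low, high

# Hypothetical inventory: 200 desktops and 30 light servers.
low, high = estimate_iops({"virtual_desktop": 200, "light_server": 30})
# 200*5 + 30*50 = 2,500 IOPS low; 200*20 + 30*100 = 7,000 IOPS high
```

The wide spread between the low and high estimates is itself the argument for measuring rather than estimating wherever a workload is heavy.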