Thinking about this further and trying to be more efficient, you determine that while the
virtual disks will be configured to be 50 GB, on average they will need only 20 GB and the
rest will be empty, so you can use thin provisioning at the vSphere or storage array layer.
Using this would reduce the requirement to 3 TB, and you decide that with good use of
vSphere managed datastore objects and alerts, you can cut the extra space down from 25
percent to 20 percent, which brings the total requirement to 3.6 TB.
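If you want to sanity-check that arithmetic, a quick back-of-the-envelope calculation does it; this is just a sketch in Python using the VM count, average usage, and headroom figures from the scenario above:

# Thin-provisioned capacity estimate for the 150-VM example
vm_count = 150
avg_used_gb = 20                # each 50 GB virtual disk holds ~20 GB of real data
headroom = 0.20                 # 20 percent extra space for growth and overhead

base_tb = vm_count * avg_used_gb / 1000     # 3.0 TB of actual data
total_tb = base_tb * (1 + headroom)         # 3.6 TB including headroom
print(base_tb, total_tb)                    # 3.0 3.6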
Also, depending on your array, you may be able to deduplicate the storage itself, which has
a high degree of commonality. Assuming a conservative 2:1 deduplication ratio, you would
then need only 1.5 TB of capacity—and with an additional 20 percent for various things,
that's 1.8 TB.
With only 1.8 TB needed, you could fit that on a very small 3+1 RAID 5 using 750 GB drives,
which would net 2.25 TB.
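The deduplication and RAID 5 numbers follow the same pattern; this sketch simply restates the 2:1 deduplication assumption and the 3+1 RAID 5 layout described above:

# Deduplicated capacity versus usable RAID 5 space
data_tb = 3.0
dedup_ratio = 2.0                            # conservative 2:1 assumption
needed_tb = data_tb / dedup_ratio * 1.2      # 1.5 TB plus 20 percent = 1.8 TB

data_drives, drive_gb = 3, 750               # 3+1 RAID 5: three data drives, one parity
usable_tb = data_drives * drive_gb / 1000    # 2.25 TB usable
print(needed_tb, usable_tb)                  # 1.8 2.25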
This would be much cheaper, right? Much more efficient, right? After all, we've gone from
thirteen 1 TB spindles to four 750 GB spindles.
It's not that simple. This will become clear as you go through the exercise a second time,
but this time work through the same design with a performance-centered planning approach:
You determine you will have 150 VMs (the same as before).
You look at their workloads, and although they spike at 200 IOPS, they average 50 IOPS,
and the duty cycles across the VMs don't seem to spike at the same time, so you decide
to use the average.
You look at the throughput requirements and see that although they spike at 200 MBps
during a backup, for the most part the VMs drive only 3 MBps. (For perspective, copying a
file to a USB 2 memory stick can drive 12 MBps, so this is a small amount of bandwidth for
a server.) The I/O size is generally small, in the 4 KB range.
Among the 150 virtual machines, most are general-purpose servers, but 10 are "big hosts"
(for example, Exchange servers and some SharePoint back-end SQL Server machines) that
require specific planning, so you put them aside to design separately using the reference
architecture approach. The remaining 140 VMs can be characterized as needing an average
of 7,000 IOPS (140 × 50 IOPS) and 420 MBps of average throughput (140 × 3 MBps).
Assuming no RAID losses or cache gains, 7,000 IOPS translates to the following:
39 15K RPM Fibre Channel/SAS drives (7,000 IOPS/180 IOPS per drive)
59 10K RPM Fibre Channel/SAS drives (7,000 IOPS/120 IOPS per drive)
88 5,400 RPM SATA drives (7,000 IOPS/80 IOPS per drive)
7 enterprise flash drives (7,000 IOPS/1,000 IOPS per drive)
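A short calculation makes that spindle math concrete; the per-drive IOPS values below are the rule-of-thumb figures used in this example, not guarantees for any particular drive model:

import math

total_iops = 140 * 50                        # 7,000 IOPS in aggregate

# Rule-of-thumb IOPS per spindle, ignoring RAID losses and cache gains
per_drive_iops = {"15K FC/SAS": 180, "10K FC/SAS": 120,
                  "5,400 RPM SATA": 80, "enterprise flash": 1000}

for drive, iops in per_drive_iops.items():
    print(drive, math.ceil(total_iops / iops))   # 39, 59, 88, 7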
Assuming no RAID losses or cache gains, 420 MBps translates into 3,360 Mbps. At the
array and ESXi host layers, this will require the following:
Two 4 Gbps Fibre Channel array ports (although it could fit on one, you need two for
high availability).
Two 10 GbE ports (though it could fit on one, you need two for high availability).
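The bandwidth side works the same way; this sketch converts the aggregate MBps to Mbps and compares it against nominal port speeds, ignoring protocol overhead, which is why the answer is "it fits on one port, but use two for availability":

import math

total_mbps = 140 * 3 * 8                      # 420 MBps x 8 bits = 3,360 Mbps

for port_name, port_mbps in {"4 Gbps FC": 4000, "10 GbE": 10000}.items():
    ports_needed = max(math.ceil(total_mbps / port_mbps), 2)   # 1 for bandwidth, 2 for HA
    print(port_name, ports_needed)            # 2 in both cases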