Picking a protocol type has historically focused on the following criteria:
vSphere Feature Support Although major VMware features such as vSphere HA and
vMotion initially required VMFS, they are now supported on all storage types, including
raw device mappings (RDMs) and NFS datastores. vSphere feature support is generally not
a protocol-selection criterion, and there are only a few features that lag on RDMs and NFS,
such as native vSphere snapshots on physical compatibility mode RDMs or the ability to cre-
ate RDMs on NFS.
Storage Capacity Efficiency Thin provisioning behavior at the vSphere layer, universally
and properly applied, drives a very high efficiency, regardless of protocol choice. Applying
thin provisioning at the storage array (on both block and NFS objects) delivers a higher
overall efficiency than applying it only at the virtualization layer. Emerging techniques for
extra array capacity efficiency (such as detecting and reducing storage consumed when there
is information in common using compression and data deduplication) are currently most
effectively used on NFS datastores but are expanding to include block use cases. One common
error is to look at storage capacity (GB) as the sole vector of efficiency; in many cases,
the performance envelope requires a fixed number of spindles even with advanced caching
techniques. Often in these cases, efficiency is measured in spindle density, not in GB. For
most vSphere customers, efficiency tends to be a function of operational process rather than
protocol or platform choice.
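To make the spindle-density point concrete, here is a minimal sketch in Python. The workload numbers and the per-spindle IOPS and capacity figures are illustrative assumptions, not sizing guidance; the point is only that a performance requirement can fix the spindle count even after thin provisioning has shrunk the capacity requirement.

```python
import math

# Illustrative assumptions (not sizing guidance): a 15K spindle
# delivering roughly 180 IOPS and 600 GB of usable capacity.
IOPS_PER_SPINDLE = 180
GB_PER_SPINDLE = 600

def spindles_required(workload_iops, provisioned_gb, thin_ratio):
    """Return (required, by_performance, by_capacity) spindle counts.

    thin_ratio is the fraction of provisioned capacity actually
    consumed once thin provisioning is applied (e.g., 0.4 = 40%).
    """
    by_performance = math.ceil(workload_iops / IOPS_PER_SPINDLE)
    by_capacity = math.ceil(provisioned_gb * thin_ratio / GB_PER_SPINDLE)
    # The array must satisfy both constraints, so the larger count wins.
    return max(by_performance, by_capacity), by_performance, by_capacity

required, by_perf, by_cap = spindles_required(workload_iops=9000,
                                              provisioned_gb=20000,
                                              thin_ratio=0.4)
print(f"Capacity alone needs {by_cap} spindles, "
      f"performance needs {by_perf}; the array needs {required}.")
```

In this example, thin provisioning cuts the capacity-driven count to 14 spindles, but the 9,000 IOPS workload still demands 50, so spindle density rather than GB is the efficiency measure that matters.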
Performance Many vSphere customers see similar performance regardless of protocol
choice. Properly designed iSCSI and NFS over Gigabit Ethernet can support very large
VMware deployments, particularly with the small-block (4 KB-64 KB) I/O patterns that charac-
terize most general Windows workloads, which don't need more than roughly 80 MBps of 100
percent read or write I/O bandwidth or 160 MBps of mixed I/O bandwidth. This difference in
the throughput limit is due to the 1 Gbps/2 Gbps bidirectional nature of 1GbE—pure read or
pure write workloads are unidirectional, but mixed workloads are bidirectional.
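A back-of-the-envelope check on those numbers, assuming a payload efficiency of roughly 65 percent (an illustrative figure covering Ethernet/IP/TCP and iSCSI or NFS overhead, not a measured value), shows where the two ceilings come from:

```python
# Rough 1GbE throughput ceiling, per the full-duplex reasoning above.
LINK_GBPS = 1.0            # 1GbE carries 1 Gbps in each direction
PAYLOAD_EFFICIENCY = 0.65  # assumed protocol/real-world overhead factor

per_direction_mbps = LINK_GBPS * 1000 / 8 * PAYLOAD_EFFICIENCY  # MBps

# A 100 percent read (or 100 percent write) workload moves data in
# one direction only; a mixed workload can use both directions at once.
pure_read_or_write = per_direction_mbps
mixed = 2 * per_direction_mbps

print(f"~{pure_read_or_write:.0f} MBps unidirectional, "
      f"~{mixed:.0f} MBps mixed")  # roughly 80 and 160 MBps
```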
Fibre Channel (and by extension, FCoE) generally delivers a better performance envelope
with very large-block I/O (VMs supporting DSS database workloads or SharePoint), which
tends to demand a high degree of throughput. Less important generally but still important
for some workloads, Fibre Channel delivers a lower-latency model and also tends to have
a faster failover behavior because iSCSI and NFS always depend on some degree of TCP
retransmission for loss and, in some iSCSI cases, ARP—all of which drive failover handling
into tens of seconds versus seconds with Fibre Channel or FCoE. Load balancing and scale-
out across multiple Gigabit Ethernet links can drive up iSCSI throughput. Link aggregation
techniques can help, but they work only when
you have many TCP sessions. Because the NFS client in vSphere uses a single TCP session
for data transmission, link aggregation won't improve the throughput of individual NFS
datastores. Broad availability of 10 Gb Ethernet brings higher-throughput options to NFS
datastores.
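The link-aggregation limitation becomes clearer if you model how an aggregate typically chooses a member link. The sketch below uses a simplified source/destination hash (real LACP/EtherChannel hash policies vary, and the addresses and ports here are made up): every packet of a given TCP session hashes to the same physical link, so a single NFS data session can never use more than one link, while many distinct iSCSI sessions can spread across the bundle.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

def member_link(flow: Flow, link_count: int) -> int:
    """Pick a member link by hashing the flow identifiers.

    Every packet of the same TCP session produces the same hash,
    so the whole session is pinned to one physical link.
    """
    key = f"{flow.src_ip}|{flow.dst_ip}|{flow.src_port}|{flow.dst_port}"
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % link_count

LINKS = 4

# One NFS datastore = one TCP session from the host to the NAS array.
nfs_session = Flow("192.168.10.11", "192.168.10.50", 875, 2049)
print("NFS session always lands on link", member_link(nfs_session, LINKS))

# Many iSCSI sessions (or multiple NFS datastores on different target
# IPs) create many distinct flows, which can spread across the links.
iscsi_flows = [Flow("192.168.10.11", f"192.168.10.{60 + i}", 50000 + i, 3260)
               for i in range(8)]
print("Distinct iSCSI flows land on links:",
      sorted({member_link(f, LINKS) for f in iscsi_flows}))
```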
You can make every protocol configuration work in almost all use cases; the key is in the
details (covered in this chapter). In practice, the most important thing is what you know and feel
comfortable with.
The most flexible vSphere configurations tend to use a combination of both VMFS (which
requires block storage) and NFS datastores (which require NAS), as well as RDMs on a selective
basis (block storage).