Supporting Large Throughput (IOPS) Workloads on NFS
High-throughput (IOPS) workloads are usually gated by the backend configuration (this is as true of NAS devices as it is of block devices) rather than by the protocol or transport, because such workloads are also generally low bandwidth (MBps). By backend, we mean the array target. If the workload is cached, performance is determined by the cache response, which is almost always astronomical. In the real world, however, performance is most often determined not by the cache response but by the spindle configuration that supports the storage object. In the case of NFS datastores, the storage object is the file system, so the considerations that apply at the ESXi host for VMFS (disk configuration and interface queues) apply within the NFS server. Because the internal architecture of an NFS server varies so greatly from vendor to vendor, it's almost impossible to give universal recommendations, but here are a few examples. On a NetApp FAS array, the IOPS achieved is determined primarily by the FlexVol/aggregate/RAID group configuration. On an EMC VNX array, it is likewise determined primarily by the Automated Volume Manager/dVol/RAID group configuration. There are other considerations (at a certain point, the scale of the interfaces on the array and the host's ability to generate I/O become the limit), but up to the limits that users commonly encounter, performance is far more often constrained by the backend disk configuration that supports the file system. Make sure your file system has sufficient backend spindles in the container to deliver the performance needed by all the VMs that will reside in the file system exported via NFS.
With these NFS storage design considerations in mind, let's move forward with creating and
mounting an NFS datastore.
There's Always an Exception to the Rule
Thus far, we've been talking about how NFS always uses only a single link, and how you always need to use multiple VMkernel ports and multiple NFS exports in order to utilize multiple links. Normally, vSphere requires that you mount an NFS datastore using the same IP address or hostname and path on all hosts. vSphere 5.0 added the ability to use a DNS hostname that resolves to multiple IP addresses. However, each vSphere host will resolve the DNS name only once. This means that it will resolve to only a single IP address and will continue to use only a single link. In this case, there is no exception to the rule. However, this configuration can provide some rudimentary load balancing for multiple hosts accessing a datastore via NFS over multiple links.
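As a purely illustrative sketch (the hostname and addresses here are invented), the round-robin DNS setup for such an NFS server name might be nothing more than a pair of A records in the zone file:

    nfs-filer    IN    A    192.168.50.11
    nfs-filer    IN    A    192.168.50.12

Each ESXi host resolves nfs-filer once and keeps whichever address it received, so a single host still drives one link, but different hosts in the cluster may land on different addresses and therefore different links.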
Creating and Mounting an NFS Datastore
In this procedure, we will show you how to create and mount an NFS datastore in vSphere. The term create here is a bit of a misnomer; the file system is actually created on the NFS server and simply exported. We can't really show you that part of the process, because the procedures vary so greatly from vendor to vendor; what works for one vendor to create an NFS datastore is likely to be different for another.
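Before walking through the procedure, it may help to see what the mount itself looks like from the ESXi command line. The following is a minimal sketch; the server name, export path, and datastore label are placeholders, and your environment's values will differ:

    esxcli storage nfs list
    esxcli storage nfs add --host=nfs01.example.com --share=/vol/vsphere_ds01 --volume-name=NFS_DS01

The first command shows any NFS datastores already mounted on the host; the second mounts the export as a datastore. Remember that every host that needs access to the datastore must mount it using the identical server name (or IP address) and path.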
Before you start, ensure that you've completed the following steps:
1. You created at least one VMkernel port for NFS traffic. If you intend to use multiple VMkernel ports for NFS traffic, ensure that you configure your vSwitches and physical switches appropriately, as described in “Crafting a Highly Available NFS Design.”
2. You configured your ESXi host for NFS storage according to the vendor's best practices, including time-out values and any other settings. At the time of this writing, many