Figure 6.16 The topology of an NFS configuration is similar to iSCSI from a connectivity standpoint but very different from a configuration standpoint. Each ESXi host has a minimum of two VMkernel ports, each physically connected to two Ethernet switches; storage and LAN are isolated, physically or via VLANs. Each switch has a minimum of two connections to redundant front-end array network interfaces (across storage processors) on the NFS server, whose exported filesystems sit on block storage internal to the server.

[Figure: three ESXi hosts connecting through two Ethernet switches to an NFS server exporting three NFS datastores.]
In the early days of using NFS with VMware, NFS was categorized as being a lower-
performance option for use with ISOs and templates but not production VMs. If production
VMs were used on NFS datastores, the historical recommendation would have been to relocate
the VM swap to block storage. Although it is true that NAS and block architectures are different
and, likewise, their scaling models and bottlenecks are generally different, this perception is
mostly rooted in how people have used NAS historically.
The reality is that it's absolutely possible to build an enterprise-class NAS infrastructure. NFS
datastores can support a broad range of virtualized workloads and do not require you to relo-
cate the VM swap. However, in cases where NFS will be supporting a broad set of production
VM workloads, you will need to pay attention to the NFS server backend design and network
infrastructure. You need to apply the same degree of care to bet-the-business NAS as you would
if you were using block storage via Fibre Channel, FCoE, or iSCSI. With vSphere, your NFS
server isn't being used as a traditional file server, where performance and availability requirements are relatively low. Rather, it's being used as an NFS server supporting a mission-critical
application—in this case the vSphere environment and all the VMs on those NFS datastores.
We mentioned previously that vSphere implements an NFSv3 client using TCP. This is impor-
tant to note because it directly impacts your connectivity options. Each NFS datastore uses two
TCP sessions to the NFS server: one for NFS control traffic and the other for NFS data traffic.
In effect, the vast majority of the NFS traffic for a single datastore will use a
single TCP session. Consequently, link aggregation (which works on a per-flow
basis from one source to one target) will use only one Ethernet link per datastore, regardless of
how many links are included in the link aggregation group. To use the aggregate throughput
of multiple Ethernet interfaces, you need multiple datastores, and no single datastore will be
able to use more than one link's worth of bandwidth. The approach available to iSCSI (multiple
iSCSI sessions per iSCSI target) is not available in the NFS use case. We'll discuss techniques for
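The per-flow behavior described above can be illustrated with a toy sketch. This is a hypothetical model, not ESXi's actual implementation: real switches and vSphere use hashing policies such as "Route based on IP hash," and the IP addresses below are invented for illustration. The point it demonstrates is that a single (source, destination) flow always hashes to the same physical link, so one datastore's traffic never spreads across a link aggregation group, while multiple datastores on different server IPs may land on different links.

```python
import zlib

def select_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick one physical link in a LAG by hashing the flow's endpoints.

    A deterministic hash (CRC32 here) stands in for a switch's per-flow
    hashing policy: the same (source, destination) pair always maps to
    the same link index.
    """
    return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % num_links

# One ESXi VMkernel port talking to one NFS server IP is a single flow,
# so every packet for that datastore traverses the same link.
esxi_vmk = "10.0.0.11"   # hypothetical VMkernel port IP
num_links = 2            # two links in the aggregation group

link_a = select_link(esxi_vmk, "10.0.1.50", num_links)
link_b = select_link(esxi_vmk, "10.0.1.50", num_links)
assert link_a == link_b  # one datastore never spreads across links

# Datastores reached via different NFS server IPs are distinct flows and
# *may* hash to different links; that is how aggregate throughput across
# the group is achieved.
targets = ["10.0.1.50", "10.0.1.51", "10.0.1.52"]
links_used = {select_link(esxi_vmk, t, num_links) for t in targets}
```

Note that the distribution across links depends entirely on the hash of the endpoint pairs; with an unlucky set of addresses, several datastores can still end up on the same link.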