You must have appropriate support for iBFT in the hardware. One might argue that using
Auto Deploy would provide much of the same benefit as booting from an iSCSI SAN, but each
approach has its advantages and disadvantages.
iSCSI is the last of the block-based shared storage options available in vSphere; now we move
on to the Network File System (NFS), the only NAS protocol that vSphere supports.
Jumbo Frames Are Supported
VMware ESXi does support jumbo frames for all VMkernel traffic, including both iSCSI and NFS,
and they should be used when needed. However, it is then critical to configure a consistent, larger
maximum transmission unit (MTU) size on all devices in all the possible networking paths; otherwise,
oversized frames will be dropped or fragmented and communication will break down. Depending on the network
hardware and traffic type, jumbo frames may or may not yield significant benefits. As always, you
will need to weigh the benefits against the operational overhead of supporting this configuration.
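For example, here is a minimal sketch of enabling a 9,000-byte MTU on an ESXi host and verifying it end to end; the vSwitch name, VMkernel interface, and target IP address are placeholders for your own environment, and the physical switches in the path must be configured separately:

# Raise the MTU on the standard vSwitch carrying the storage traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the VMkernel interface used for iSCSI or NFS
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify end to end: 8,972 bytes of payload plus 28 bytes of ICMP/IP
# headers equals 9,000, and -d sets the don't-fragment bit so any hop
# with a smaller MTU will cause the ping to fail
vmkping -d -s 8972 192.168.10.50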
Understanding the Network File System
The NFS protocol is a standard originally developed by Sun Microsystems to enable remote systems
to access a file system on another host as if it were locally attached. vSphere implements a client
compliant with NFSv3 using TCP.
When NFS datastores are used by vSphere, no local file system (such as VMFS) is used; the
file system lives on the remote NFS server. This means that NFS datastores need to handle the
same access control and file-locking requirements that vSphere delivers on block storage using
the vSphere Virtual Machine File System, or VMFS (we'll describe VMFS in more detail later in
this chapter in the section “Examining the vSphere Virtual Machine File System”). ESXi handles
this itself rather than relying on traditional NFS (Network Lock Manager) locking: its NFS
client creates its own lock files on the NFS server.
The movement of the file system from the ESXi host to the NFS server also means that you
don't need to handle zoning/masking tasks. This makes an NFS datastore one of the easiest
storage options to simply get up and running. On the other hand, it also means that all of the
high availability and multipathing functionality that is normally part of a Fibre Channel, FCoE,
or iSCSI storage stack is replaced by the networking stack. We'll discuss the implications of this
in the section titled “Crafting a Highly Available NFS Design.”
Figure 6.16 shows the topology of an NFS configuration. Note the similarities to the
topologies in Figure 6.8 and Figure 6.13.
Technically, any NFS server that complies with NFSv3 over TCP will work with vSphere
(vSphere does not support NFS over UDP), but as with Fibre Channel and iSCSI, the server
and network infrastructure must be able to support your entire vSphere environment. Therefore,
we recommend you use only NFS servers that are explicitly listed on the VMware HCL.
Using NFS datastores moves the elements of storage design associated with LUNs from the
ESXi hosts to the NFS server. Instead of exposing block storage, protected using the RAID
techniques described earlier, and allowing the ESXi hosts to create a file system (VMFS) on
those block devices, the NFS server uses its own RAID-protected block storage and creates
its own file systems on it. These file systems are then exported via
NFS and mounted on your ESXi hosts.
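As a hedged sketch of both halves of that arrangement (the server name, export path, subnet, and datastore label below are all placeholders): on a Linux-based NFS server the export might be defined in /etc/exports, and each ESXi host then mounts it with esxcli. Note that ESXi mounts NFS exports as root, so root squashing is typically disabled on the export:

# /etc/exports on the NFS server: share the file system read-write with
# the storage subnet; no_root_squash because ESXi mounts as root
/vol/nfs_ds01  192.168.10.0/24(rw,no_root_squash,sync)

# On each ESXi host, mount the export as an NFS datastore
esxcli storage nfs add --host=nas01.example.com --share=/vol/nfs_ds01 --volume-name=NFS_DS01

# Verify the mount
esxcli storage nfs list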