fast to replace a failed ESXi host. During this time, vSphere HA will make sure the VMs are run-
ning on the other ESXi hosts in the cluster. However, taking advantage of new features within
vSphere 5.5 such as VSAN will certainly require careful consideration. Storage underpins your
entire vSphere environment. Make the effort to ensure that your shared storage design is robust,
taking into consideration internal- and external-based shared storage choices.
No Local Storage? No Problem!
What if you don't have local storage? (Perhaps you have a diskless blade system, for example.)
There are many options for diskless systems, including booting from Fibre Channel/iSCSI SAN
and network-based boot methods like vSphere Auto Deploy (discussed in Chapter 2, “Planning
and Installing VMware ESXi”). There is also the option of using USB boot, a technique that we've
employed on numerous occasions in lab and production environments. Both Auto Deploy and USB
boot give you some flexibility in quickly reprovisioning hardware or deploying updated versions
of vSphere, but there are some quirks, so plan accordingly. Refer to Chapter 2 for more details on
selecting the configuration of your ESXi hosts.
Shared storage is the basis for most vSphere environments because it supports the VMs
themselves and because it is a requirement for many of vSphere's features. Shared external storage
in SAN configurations (which encompass Fibre Channel, FCoE, and iSCSI) and NAS (NFS)
is always highly consolidated, which makes it efficient. SAN/NAS or VSAN can take the direct-attached
storage in physical servers that is 10 percent utilized and consolidate it to 80 percent
utilization.
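As a back-of-the-envelope illustration of that consolidation math (the server count and capacity figures below are hypothetical, chosen only to make the 10 percent and 80 percent utilization numbers concrete):

```python
# Hypothetical example: consolidating direct-attached storage (DAS)
# from lightly used servers onto a shared array run at higher utilization.
def consolidated_capacity_needed(server_count, das_per_server_gb,
                                 das_utilization, target_utilization):
    """Return the shared-array capacity (GB) needed to hold the same
    data when the array is run at the target utilization."""
    data_in_use_gb = server_count * das_per_server_gb * das_utilization
    return data_in_use_gb / target_utilization

# 20 servers, each with 500 GB of DAS at 10% utilization,
# consolidated onto shared storage run at 80% utilization:
needed = consolidated_capacity_needed(20, 500, 0.10, 0.80)
print(f"Data actually in use: {20 * 500 * 0.10:.0f} GB")   # 1000 GB
print(f"Shared capacity needed: {needed:.0f} GB")          # 1250 GB
# ...versus the 10,000 GB of mostly idle DAS spread across the servers.
```

The same data that occupied 10,000 GB of mostly idle local disks fits in a 1,250 GB slice of a shared array, which is the efficiency argument the text is making.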
As you can see, shared storage is a key design point. Whether it's shared external storage or
you're planning to share the local storage system out, it's important to understand some of the
array architectures that vendors use to provide shared storage to vSphere environments. The
high-level overview in the following section is neutral on specific storage array vendors because
the internal architectures vary tremendously.
Defining Common Storage Array Architectures
This section is remedial for anyone with basic storage experience, but it's needed for vSphere
administrators with no storage knowledge. For people unfamiliar with storage, the topic can be
a bit disorienting at first. Servers across vendors tend to be relatively similar, but the same logic
can't be applied to the storage layer, where the core architectural differences between vendors
are vast. In spite of that, storage arrays have several core architectural elements
that are consistent across vendors, across implementations, and even across protocols.
The elements that make up a shared storage array consist of external connectivity, storage
processors, array software, cache memory, disks, and bandwidth:
External Connectivity The external (physical) connectivity between the storage array
and the hosts (in this case, the ESXi hosts) is generally Fibre Channel or Ethernet, though
InfiniBand and other rare protocols exist. The characteristics of this connectivity define the
maximum bandwidth (given no other constraints, and there usually are other constraints) of
the communication between the ESXi host and the shared storage array.
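To make that bandwidth ceiling concrete, here is a rough sketch. The link speeds and encoding efficiencies below are nominal published figures, not from this text, and real-world throughput will be lower still once protocol overhead is counted:

```python
# Approximate theoretical throughput ceilings for common storage interconnects.
# Encoding overhead: 8 Gb Fibre Channel uses 8b/10b encoding (80% efficient);
# 10 Gb Ethernet uses 64b/66b encoding (~97% efficient).
# These are illustrative approximations, before any protocol overhead.
LINKS = {
    "8 Gb Fibre Channel": (8e9, 8 / 10),    # (line rate in bits/s, encoding efficiency)
    "10 Gb Ethernet":     (10e9, 64 / 66),
}

for name, (line_rate_bps, encoding_eff) in LINKS.items():
    # bits/s * efficiency -> usable bits/s; / 8 -> bytes/s; / 1e6 -> MB/s
    usable_mb_s = line_rate_bps * encoding_eff / 8 / 1e6
    print(f"{name}: ~{usable_mb_s:.0f} MB/s usable, per direction")
```

So an 8 Gb FC port tops out around 800 MB/s per direction and a 10 GbE port around 1,200 MB/s, and no amount of tuning elsewhere in the stack pushes a single link past that ceiling.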