Fewer Bigger Servers or More Smaller Servers?
Remember from Table 1.2 that VMware ESXi supports servers with up to 320 logical CPU cores and
up to 4TB of RAM. With vSphere DRS, though, you can combine multiple smaller servers for the
purpose of managing aggregate capacity. This means that bigger, more-powerful servers might not
be better servers for virtualization projects. These larger servers, in general, are significantly more
expensive than smaller servers, and using a greater number of smaller servers (often referred to as
"scaling out") may provide greater flexibility than a smaller number of larger servers (often referred
to as "scaling up"). The key thing to remember is that a bigger server isn't necessarily a better server.
vSphere Storage DRS
vSphere Storage DRS takes the idea of vSphere DRS and applies it to storage. Just as vSphere
DRS helps to balance CPU and memory utilization across a cluster of ESXi hosts, Storage DRS
helps balance storage capacity and storage performance across a cluster of datastores using
mechanisms that echo those used by vSphere DRS.
We described vSphere DRS's feature called intelligent placement, which automates the place-
ment of new VMs based on resource usage within an ESXi cluster. In the same fashion, Storage
DRS has an intelligent placement function that automates the placement of VM virtual disks
based on storage utilization. Storage DRS does this through the use of datastore clusters. When
you create a new VM, you simply point it to a datastore cluster, and Storage DRS automatically
places the VM's virtual disks on an appropriate datastore within that datastore cluster.
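To make the intelligent-placement idea concrete, here is a simplified sketch in Python. It is an illustrative model only, not VMware's actual placement algorithm or API: the datastore names, the 15 ms latency threshold, and the "most free space wins" rule are all assumptions chosen for the example.

```python
# Illustrative model of Storage DRS initial placement (NOT VMware's
# actual algorithm): among datastores in a cluster that have room for
# the new disk and are below a latency threshold, pick the one with
# the most free space.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    used_gb: float
    latency_ms: float

    @property
    def free_gb(self) -> float:
        return self.capacity_gb - self.used_gb

def place_disk(cluster, disk_gb, max_latency_ms=15.0):
    """Return the datastore that should receive a new virtual disk,
    or None if no datastore in the cluster is a suitable candidate."""
    candidates = [ds for ds in cluster
                  if ds.free_gb >= disk_gb and ds.latency_ms <= max_latency_ms]
    if not candidates:
        return None
    return max(candidates, key=lambda ds: ds.free_gb)

# Hypothetical datastore cluster: ds03 is excluded for high latency.
cluster = [
    Datastore("ds01", 2048, 1900, 8.0),
    Datastore("ds02", 2048, 1200, 12.0),
    Datastore("ds03", 2048, 800, 30.0),
]
print(place_disk(cluster, disk_gb=100).name)  # → ds02
```

The point of the sketch is the workflow the text describes: the administrator targets the cluster as a whole, and the placement logic, not the administrator, selects the individual datastore.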
Likewise, just as vSphere DRS uses vMotion to balance resource utilization dynamically,
Storage DRS uses Storage vMotion to rebalance storage utilization based on capacity and/or
latency thresholds. Because Storage vMotion operations are typically much more resource inten-
sive than vMotion operations, vSphere provides extensive controls over the thresholds, timing,
and other guidelines that will trigger a Storage DRS automatic migration via Storage vMotion.
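The threshold-driven rebalancing described above can be sketched as follows. This is a toy model, not VMware's implementation: the 80% space and 15 ms latency thresholds are hypothetical stand-ins for the user-configurable settings the text mentions, and the output stands in for a Storage vMotion recommendation rather than an actual migration.

```python
# Toy sketch of a Storage DRS-style rebalance check (thresholds are
# hypothetical, not VMware defaults): when a datastore crosses a
# space-used or latency threshold, recommend migrating load to the
# least-utilized healthy datastore in the same datastore cluster.
def rebalance_recommendations(datastores, space_pct=80.0, latency_ms=15.0):
    """datastores: list of dicts with keys name, used_pct, latency_ms.
    Returns a list of (source, destination) migration pairs."""
    hot = [d for d in datastores
           if d["used_pct"] > space_pct or d["latency_ms"] > latency_ms]
    cool = [d for d in datastores
            if d["used_pct"] <= space_pct and d["latency_ms"] <= latency_ms]
    recs = []
    for src in hot:
        if cool:
            # Move toward the least-utilized datastore under threshold.
            dst = min(cool, key=lambda d: d["used_pct"])
            recs.append((src["name"], dst["name"]))
    return recs

datastores = [
    {"name": "ds01", "used_pct": 91.0, "latency_ms": 22.0},
    {"name": "ds02", "used_pct": 55.0, "latency_ms": 6.0},
    {"name": "ds03", "used_pct": 70.0, "latency_ms": 9.0},
]
print(rebalance_recommendations(datastores))  # → [('ds01', 'ds02')]
```

Because each recommendation implies a resource-intensive Storage vMotion, the real feature exposes far more tunable guardrails than this sketch, which is exactly the point the paragraph above makes.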
Storage I/O Control and Network I/O Control
VMware vSphere has always had extensive controls for modifying or controlling the allocation of
CPU and memory resources to VMs. What vSphere didn't have prior to the release of vSphere 4.1
was a way to apply the same sort of extensive controls to storage I/O and network I/O. Storage
I/O Control and Network I/O Control address that shortcoming.
Storage I/O Control (SIOC) allows vSphere administrators to assign relative priority to stor-
age I/O as well as assign storage I/O limits to VMs. These settings are enforced cluster-wide;
when an ESXi host detects storage congestion through an increase of latency beyond a
user-configured threshold, it will apply the settings configured for that VM. The result is that
VMware administrators can ensure that the VMs that need priority access to storage resources
get the resources they need. In vSphere 4.1, Storage I/O Control applied only to VMFS storage;
vSphere 5 extended that functionality to NFS datastores.
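A minimal model of the shares mechanism may help. This is an illustrative sketch, not SIOC's actual implementation: the 30 ms congestion threshold, the VM names, and the idea of dividing a fixed device queue depth proportionally are simplifying assumptions made for the example.

```python
# Simplified model of Storage I/O Control (illustrative only): while
# observed datastore latency stays below the congestion threshold, no
# throttling occurs; once latency exceeds it, the host's device queue
# depth is divided among VMs in proportion to their configured shares.
def sioc_queue_slots(vms, queue_depth, latency_ms, threshold_ms=30.0):
    """vms: dict mapping VM name -> share value.
    Returns a dict mapping VM name -> allotted queue slots."""
    if latency_ms <= threshold_ms:
        # No congestion detected: every VM may use the full queue.
        return {name: queue_depth for name in vms}
    total = sum(vms.values())
    # Proportional split, with a floor of one slot per VM.
    return {name: max(1, queue_depth * shares // total)
            for name, shares in vms.items()}

# Hypothetical VMs: the database VM holds twice web01's shares and
# four times test01's, so under congestion it gets the largest slice.
vms = {"db01": 2000, "web01": 1000, "test01": 500}
print(sioc_queue_slots(vms, queue_depth=64, latency_ms=45.0))
# → {'db01': 36, 'web01': 18, 'test01': 9}
```

The sketch captures the behavior the paragraph describes: shares are relative priorities that only bite when congestion is detected, which is how SIOC ensures high-priority VMs keep getting storage resources under contention.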