When setting the EVC mode for a cluster, keep in mind that EVC may mask some CPU-specific
features—such as newer multimedia extensions or encryption instructions—from the VMs in
the cluster when vCenter Server and ESXi apply the EVC baseline. VMs that rely on these
advanced extensions might be affected by EVC, so be sure that your workloads won't be
adversely affected before setting the cluster's EVC mode.
EVC is a powerful feature that assures vSphere administrators that vMotion compatibility
will be maintained over time, even as hardware generations change. With EVC, you won't have
to remember what life was like without vMotion.
Traditional vMotion only helps with balancing CPU and memory load. In the next section
we'll discuss a method for manually balancing storage load.
Using Storage vMotion
vMotion and Storage vMotion are like two sides of the same coin. Traditional vMotion migrates
a running VM from one physical host to another, moving CPU and memory usage between
hosts but leaving the VM's storage unchanged. This allows you to manually balance the CPU
and memory load by shifting VMs from host to host. Storage vMotion, however, migrates a
running VM's virtual disks from one datastore to another datastore but leaves the VM execut-
ing—and therefore using CPU and memory resources—on the same ESXi host. This allows you
to manually balance the “load” or utilization of a datastore by shifting a VM's storage from one
datastore to another. Like vMotion, Storage vMotion is a live migration; the VM does not incur
any outage during the migration of its virtual disks from one datastore to another.
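Because Storage vMotion lets you rebalance datastore utilization by hand, the first decision is usually which datastore should receive the VM's disks. The following sketch is purely illustrative—the datastore names, capacities, and selection policy are invented for this example and are not part of any VMware API—but it shows one simple way to reason about the choice: pick the datastore with the most free space that can still hold the VM's virtual disks.

```python
# Illustrative sketch (not a VMware API): choosing a Storage vMotion
# target by picking the datastore with the most free capacity.
# Datastore names and sizes below are hypothetical.

def pick_target_datastore(datastores, vm_disk_gb):
    """Return the name of the datastore with the most free space
    that can still hold the VM's virtual disks, or None."""
    candidates = [
        (name, capacity_gb - used_gb)
        for name, (capacity_gb, used_gb) in datastores.items()
        if capacity_gb - used_gb >= vm_disk_gb
    ]
    if not candidates:
        return None  # no datastore has enough room for the disks
    # Sort by free space, most free first
    return max(candidates, key=lambda pair: pair[1])[0]

datastores = {
    "datastore1": (500, 450),  # (capacity GB, used GB)
    "datastore2": (500, 200),
    "datastore3": (250, 100),
}
print(pick_target_datastore(datastores, vm_disk_gb=60))  # → datastore2
```

In practice, of course, free capacity is only one input; I/O load on the datastore matters just as much when the goal is balancing utilization.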
So how does Storage vMotion work? The process is relatively straightforward:
1. First, vSphere copies over the nonvolatile files that make up a VM: the configuration file
(VMX), VMkernel swap, log files, and snapshots.
2. Next, vSphere starts a ghost or shadow VM on the destination datastore. Because its virtual
disk hasn't been copied over yet, this ghost VM sits idle waiting for it.
3. Storage vMotion first creates the destination disk. Then a mirror device—a new driver
that mirrors I/Os between the source and destination—is inserted into the data path
between the VM and the underlying storage.
SVM Mirror Device Information in the Logs
If you review the vmkernel log files on an ESXi host during and after a Storage vMotion operation,
you will see log entries prefixed with SVM that show the creation of the mirror device and that
provide information about the operation of the mirror device.
4. With the I/O mirroring driver in place, vSphere makes a single-pass copy of the virtual
disk(s) from the source to the destination. As changes are made to the source, the I/O
mirror driver ensures that those changes are also reflected at the destination.
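The interplay between the single-pass copy and the mirror driver in steps 3 and 4 can be sketched with a toy simulation. This is not VMware's implementation—disks are modeled here as simple lists of blocks, and the class and method names are invented—but it captures the key idea: writes to already-copied blocks are mirrored to the destination, while writes ahead of the copy cursor are picked up by the copy pass itself.

```python
# Toy simulation of a single-pass disk copy with I/O mirroring.
# Not VMware code: each "disk" is just a Python list of blocks.

class MirroredCopy:
    def __init__(self, source):
        self.source = source                  # disk on the source datastore
        self.dest = [None] * len(source)      # disk on the destination datastore
        self.copied = 0                       # copy cursor for the single pass

    def guest_write(self, block, data):
        """A write issued by the running VM while the copy is in flight."""
        self.source[block] = data
        if block < self.copied:
            # Block was already copied: the mirror driver sends the
            # write to the destination too, so it never goes stale.
            self.dest[block] = data
        # Blocks at or past the cursor are handled by the copy pass.

    def copy_step(self):
        """Copy the next block; return False once the pass is complete."""
        if self.copied >= len(self.source):
            return False
        self.dest[self.copied] = self.source[self.copied]
        self.copied += 1
        return True

disk = MirroredCopy(["a", "b", "c", "d"])
disk.copy_step(); disk.copy_step()     # blocks 0 and 1 copied
disk.guest_write(0, "A")               # mirrored to the destination
disk.guest_write(3, "D")               # caught later by the copy pass
while disk.copy_step():
    pass
print(disk.dest == disk.source)        # → True
```

Only one pass over the disk is needed because no block can be missed: every write either lands behind the cursor (and is mirrored) or ahead of it (and is copied later).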
 