Master It What would best identify an oversubscribed NFS volume from a performance
standpoint? How would you identify the issue? What is it most likely to be? What are two
possible corrective actions you could take?
Solution The workload in the datastore is reaching the maximum bandwidth of a single
link. The easiest way to identify the issue is to use the vCenter performance charts and
examine the VMkernel NIC's utilization. If it is at 100 percent, the options are to upgrade
to 10 GbE or to add another NFS datastore: add another VMkernel NIC, follow the
load-balancing and high-availability decision tree to determine whether NIC teaming or
IP routing would work best, and finally use Storage vMotion to migrate some VMs to the
new datastore (remember that NIC teaming/IP routing balances load across multiple
datastores, not within a single datastore). Also remember that Storage vMotion adds
work to an already busy datastore, so consider scheduling it during a low-I/O period,
even though it can be done live.
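As a rough illustration of the check described above, the saturation test amounts to comparing sampled VMkernel NIC throughput against the link's line rate. The sample values, threshold, and function name below are hypothetical, not taken from vCenter; in practice the numbers come from the vCenter performance charts.

```python
# Sketch: flag a VMkernel NIC as saturated when its sampled
# throughput sustains near the link's line rate. All values here
# are hypothetical stand-ins for vCenter performance-chart data.

LINK_SPEED_MBPS = 1000          # 1 GbE uplink carrying the NFS traffic
SATURATION_THRESHOLD = 0.90     # treat >= 90% sustained as oversubscribed

def is_saturated(samples_mbps, link_speed=LINK_SPEED_MBPS,
                 threshold=SATURATION_THRESHOLD):
    """Return True if average observed throughput crosses the threshold."""
    avg = sum(samples_mbps) / len(samples_mbps)
    return avg >= link_speed * threshold

# Hypothetical throughput samples (Mbps) from two VMkernel NICs
busy_vmk = [980, 975, 990, 985, 992]   # pinned at line rate
idle_vmk = [120, 90, 150, 110, 95]

print(is_saturated(busy_vmk))
print(is_saturated(idle_vmk))
```

A sustained reading near line rate, rather than a single spike, is what indicates the datastore has outgrown a single link.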
Configure storage at the VM layer. With datastores in place, create VMs. During the cre-
ation of the VMs, place them in the appropriate datastores, and employ selective use of RDMs
but only where required. Leverage in-guest iSCSI where it makes sense, but understand the
impact on your vSphere environment.
Master It Without turning the machine off, convert the virtual disks on a VMFS volume
from thin to thick (eagerzeroedthick) and back to thin.
Solution Use Storage vMotion and select the target disk format during the Storage vMo-
tion process.
Master It Identify where you would use a physical compatibility mode RDM, and
configure that use case.
Solution One use case would be a Microsoft cluster (either W2K3 with MSCS or W2K8
with WFC). You should download the VMware Microsoft clustering guide and follow
that use case. Other valid answers are a case where virtual-to-physical mobility of the
LUNs is required or one where a Solutions Enabler VM is needed.
Leverage best practices for SAN and NAS storage with vSphere. Read, follow, and lever-
age key VMware and storage vendors' best practices/solutions guide documentation. Don't
oversize up front, but instead learn to leverage VMware and storage array features to monitor
performance, queues, and backend load, and then nondisruptively adapt. Plan for perfor-
mance first and capacity second. (Usually capacity is a given for performance requirements
to be met.) Spend design time on availability design and on the large, heavy I/O VMs, and
use flexible pool design for the general-purpose VMFS and NFS datastores.
Master It Quickly estimate the minimum usable capacity needed for 200 VMs with
an average VM size of 40 GB. Make some assumptions about vSphere snapshots. What
would be the raw capacity needed in the array if you used RAID 10? RAID 5 (4+1)? RAID
6 (10+2)? What would you do to nondisruptively cope if you ran out of capacity?
Solution Using rule-of-thumb math, 200 × 40 GB = 8 TB, plus 25 percent extra space
(snapshots, other VMware files) = 10 TB usable. Using RAID 10, you would need at least
20 TB raw. Using RAID 5 (4+1), you would need 12.5 TB. Using RAID 6 (10+2), you would
need 12 TB. If you ran out of capacity, you could nondisruptively add capacity to your
array, add datastores, and use Storage vMotion to redistribute VMs.
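The rule-of-thumb arithmetic above can be sketched in a few lines. The 25 percent overhead factor and the RAID data-to-parity ratios mirror the figures in the solution; the function name is just illustrative.

```python
# Sketch of the capacity estimate from the solution above.
# Usable = VM count x average size, padded 25% for snapshots and
# other VMware files; raw = usable x the RAID overhead ratio.

def raw_capacity_tb(vm_count, avg_vm_gb, overhead=0.25, raid_ratio=1.0):
    """Raw TB needed for a given RAID raw-to-usable ratio."""
    usable_tb = vm_count * avg_vm_gb / 1000 * (1 + overhead)
    return usable_tb * raid_ratio

RAID_RATIOS = {
    "RAID 10": 2.0,            # mirrored: 2x raw per usable TB
    "RAID 5 (4+1)": 5 / 4,     # 1 parity disk per 4 data disks
    "RAID 6 (10+2)": 12 / 10,  # 2 parity disks per 10 data disks
}

for level, ratio in RAID_RATIOS.items():
    print(level, raw_capacity_tb(200, 40, raid_ratio=ratio), "TB raw")
```

Plugging in 200 VMs at 40 GB reproduces the 20, 12.5, and 12 TB raw figures in the solution.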