Figure 6.63 and Figure 6.64 also show the date and time of the last compliance check. Note
that you can force a compliance check by clicking the Refresh hyperlink.
When we discuss creating VMs and adding virtual disks to a VM in Chapter 9, we'll revisit
the concept of policy-driven storage and VM storage policies.
In addition to the various methods we've shown you so far for accessing storage from a VM,
there's still one method left: using an in-guest iSCSI initiator.
Using In-Guest iSCSI Initiators
We mentioned in the section “Working with Raw Device Mappings” that RDMs were not the
only way to present storage devices directly to a VM. You can also use an in-guest iSCSI initiator
to bypass the hypervisor and access storage directly.
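To make this concrete, here is a minimal sketch of what an in-guest iSCSI login looks like, assuming a Linux guest with the open-iscsi package installed. The portal address and target IQN are placeholders; substitute the values for your own array.

```python
import subprocess

# Hypothetical values; substitute your own array's portal and target IQN.
PORTAL = "192.168.50.10:3260"
TARGET_IQN = "iqn.2000-01.com.example:storage.lun1"

def run(cmd):
    """Run a command inside the guest and return its standard output."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Step 1: Discover the targets the array presents on this portal.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# Step 2: Log in to the target. After a successful login, the LUN appears
# to the guest OS as a local block device (for example, /dev/sdb on Linux).
print(run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"]))
```

Note that this traffic flows over the VM's own network connection, not the hypervisor's storage network, which is exactly the trade-off discussed next.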
The decision whether to use in-guest iSCSI initiators will depend on numerous factors, including, but not limited to, your storage configuration (does your array support iSCSI?), your networking configuration and policy (do you have enough network bandwidth to support the additional iSCSI traffic on the VM-facing networks?), your application needs (do you have applications that need, or are specifically designed to work with, in-guest iSCSI initiators, or applications that need RDMs but could work with in-guest iSCSI initiators instead?), your consolidation targets (can you afford the extra CPU and memory overhead in the VMs that results from using an in-guest iSCSI initiator?), and your guest OS (is there a software iSCSI initiator for your particular guest OS?).
Should you decide to use an in-guest iSCSI initiator, keep in mind the following tips:
◆ The storage that you access via the in-guest initiator will be separate from the NFS and VMFS datastores you'll use for virtual disks. Keep this in mind so that you can plan your storage configuration accordingly.
◆ You will be placing more load, and more visibility, on the VM networks because all iSCSI traffic will bypass the hypervisor. You'll also be responsible for configuring and supplying redundant connections and multipathing separately from the configuration you might have supplied for iSCSI at the hypervisor level (a quick way to verify this is sketched after this list). This could result in a need for more physical NICs in your server than you had planned.
◆ If you are using 10 Gigabit Ethernet, you might need to create a more complex QoS/Network I/O Control configuration to ensure that the in-guest iSCSI traffic is appropriately prioritized.
◆ You'll lose Storage vMotion functionality for storage accessed via the in-guest iSCSI initiator because the hypervisor is not involved.
◆ For the same reason, vSphere snapshots are not supported for storage accessed via the in-guest iSCSI initiator.
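As a quick check on the redundancy point above, here is a small sketch, again assuming a Linux guest with open-iscsi, that lists the active iSCSI sessions so you can confirm the guest holds more than one path to its target.

```python
import subprocess

def active_iscsi_sessions():
    """Return the guest's active iSCSI sessions, one per line.

    'iscsiadm -m session' exits nonzero when no sessions exist,
    so treat that case as an empty list rather than an error.
    """
    result = subprocess.run(["iscsiadm", "-m", "session"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return []
    return result.stdout.strip().splitlines()

sessions = active_iscsi_sessions()
print(f"{len(sessions)} active session(s):")
for line in sessions:
    print(" ", line)

# With redundant in-guest connections configured, you would expect at
# least two sessions here, one per path to the target.
if len(sessions) < 2:
    print("Warning: no redundant path detected for in-guest iSCSI.")
```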
As with so many different areas in vSphere, there is no absolute wrong or right choice, only
the correct choice for your environment. Review the impact of using iSCSI initiators in the guest
OSes, and if it makes sense for your environment, proceed as needed.