Removing a VMFS Datastore
Removing a VMFS datastore is, fortunately, as straightforward as it seems. To remove a VMFS datastore, simply right-click the datastore object and select All vCenter Actions ➢ Delete Datastore. The vSphere Web Client will prompt for confirmation, reminding you that you will lose all the files associated with all VMs on this datastore, before actually deleting the datastore.
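If you'd rather script the removal, a minimal sketch using pyVmomi (the Python SDK for the vSphere API) might look like the following; the vCenter address, credentials, and datastore name are placeholders, and the Web Client's delete action maps to the HostDatastoreSystem.RemoveDatastore method in the API:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()   # lab shortcut; validate certificates in production
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

# Find the datastore by name, then pick any ESXi host that mounts it.
ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
ds = next(d for d in ds_view.view if d.name == 'old-datastore')
host = ds.host[0].key   # DatastoreHostMount.key is the HostSystem

# Irreversible: this deletes the datastore and every file on it.
host.configManager.datastoreSystem.RemoveDatastore(ds)
Disconnect(si)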
As with many of the other datastore-related tasks we've shown you, the vSphere Web Client will trigger a VMFS rescan for other ESXi hosts so that all hosts are aware that the VMFS datastore has been deleted.
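The Web Client handles that rescan for you; in a scripted workflow you would trigger it yourself. A short sketch, assuming the connected si and content objects from the previous example:

host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for esxi in host_view.view:
    # Refresh each host's view of VMFS volumes so the deleted
    # datastore disappears from every host's inventory.
    esxi.configManager.storageSystem.RescanVmfs()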
Like resignaturing a datastore, deleting a datastore is irreversible. Once you delete a datastore, you can't recover the datastore or any of the files that were stored in it. Be sure to double-check that you're deleting the right datastore before you proceed!
Let's now shift from working with VMFS datastores to working with another form of block-based storage, albeit one that is far less frequently used: raw device mappings, or RDMs.
Working with Raw Device Mappings
Although the concept of shared pool mechanisms (like VMFS or NFS datastores) for VMs works well in most situations, there are certain use cases where a storage device must be presented directly to the guest operating system (guest OS) inside a VM.
vSphere provides this functionality via a raw device mapping (RDM). RDMs are presented to your ESXi hosts and then, via vCenter Server, directly to a VM. Subsequent data I/O bypasses the VMFS and volume manager completely, though management is handled via a mapping file that is stored on a VMFS volume.
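To make those mechanics concrete, here is a hedged pyVmomi sketch that attaches an RDM to an existing VM, again assuming the connected session from the earlier examples; the VM name, LUN device path, controller key, and SCSI slot are all placeholders. Notice that the backing object points straight at the LUN while the small mapping file is created on a VMFS datastore:

vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == 'app-vm01')   # placeholder VM

# The raw-disk backing: guest I/O goes straight to the LUN, while the
# mapping file lives on a VMFS volume alongside the VM's other files.
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName='/vmfs/devices/disks/naa.60a98000486e2f66',   # placeholder LUN path
    compatibilityMode='physicalMode',   # or 'virtualMode'; see the two modes below
    diskMode='independent_persistent')

rdm_disk = vim.vm.device.VirtualDisk(
    backing=backing,
    controllerKey=1000,   # assumes an existing SCSI controller with key 1000
    unitNumber=1,         # assumes SCSI slot 1 on that controller is free
    key=-1)

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=rdm_disk)])
vm.ReconfigVM_Task(spec)   # returns a Task; wait for it before using the disk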
In-Guest iSCSI as an Alternative to RDMs
In addition to using RDMs to present storage devices directly to the guest OS inside a VM, you
can use in-guest iSCSI software initiators. We'll provide more information on that scenario in the
section “Using In-Guest iSCSI Initiators” later in this chapter.
RDMs should be viewed as a tactical tool in the vSphere administrator's toolkit rather than as a common use case. A common misconception is that RDMs perform better than VMFS. In reality, the performance delta between the two storage types falls within the margin of error of most tests. Although it is possible to oversubscribe a VMFS or NFS datastore (because they are shared resources) but not an RDM (because it is presented to specific VMs only), oversubscription is better handled through design and monitoring than through the extensive use of RDMs. In other words, if concerns about oversubscribing a storage resource are driving the choice of an RDM over a shared datastore model, simply choose not to put multiple VMs in the pooled datastore.
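If monitoring is the concern, the datastore summary already exposes the numbers you need. A small sketch, again assuming the earlier pyVmomi session, that reports how heavily each datastore is provisioned (on thin-provisioned, pooled datastores this figure can exceed 100 percent, which is exactly the oversubscription described above):

for d in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True).view:
    s = d.summary
    if not s.accessible:
        continue
    # Provisioned space = used space plus space promised to thin disks.
    provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
    print(f'{s.name}: {provisioned / s.capacity:.0%} of capacity provisioned')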
You can configure RDMs in two different modes:
Physical Compatibility Mode (pRDM) In this mode, all I/O passes directly through to the underlying LUN device, and the mapping file is used solely for locking and vSphere management tasks. Generally, when a storage vendor says “RDM” without specifying further,