Better availability:
Through active management of intermittent path behavior
Through more rapid path state detection
Through automated path discovery without a manual rescan
Better performance:
Through better path selection using weighted algorithms, which is critical in cases where
the paths are unequal (ALUA).
Through monitoring and adjusting the ESXi host queue depth to select the path for a given
I/O, shifting the workload from heavily used paths to lightly used paths (a simplified version
of this idea is sketched after this list).
With some arrays, through predictive optimization based on the array port queues. (The array
port queues are generally the first point of contention and tend to affect all the ESXi hosts
simultaneously; without predictive handling in advance, they tend to cause simultaneous
path changes across the ESXi cluster.)
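To make the queue-aware idea concrete, here is a minimal, purely illustrative Python sketch of a weighted, least-queue-depth path selector. The Path class, its weight and outstanding_ios fields, and the example path names are inventions for this sketch; they do not reflect how NMP, a third-party MPP, or any vendor plug-in is actually implemented.

```python
# Illustrative only: a simplified weighted, queue-aware path selector.
# Class and attribute names (Path, weight, outstanding_ios) are invented
# for this sketch and are not part of any VMware or vendor API.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    weight: float         # e.g., a lower weight for non-optimized ALUA paths
    outstanding_ios: int  # current depth of this path's queue

def select_path(paths):
    """Pick the usable path with the least weighted outstanding I/O."""
    usable = [p for p in paths if p.weight > 0]
    return min(usable, key=lambda p: p.outstanding_ios / p.weight)

paths = [
    Path("vmhba1:C0:T0:L10", weight=1.0, outstanding_ios=28),   # ALUA optimized, busy
    Path("vmhba2:C0:T1:L10", weight=1.0, outstanding_ios=4),    # ALUA optimized, idle
    Path("vmhba1:C0:T2:L10", weight=0.25, outstanding_ios=2),   # non-optimized
]
print(select_path(paths).name)  # picks the lightly used optimized path
```

The behavior mirrors the description above: with equal weights, I/O shifts toward the path with the shortest queue, while the weights keep traffic off non-optimized ALUA paths unless they are clearly the better choice.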
Previously in this chapter, in the section on VMFS, we mentioned that one potential advantage
of spanning a VMFS datastore across multiple extents on multiple LUNs would be to
increase the parallelism of the LUN queues. In addition, in this section you've heard us mention
how a third-party MPP might make multipathing decisions based on host or target queues. Why
is queuing so important? We'll review queuing in the next section.
The Importance of LUN Queues
Queues are an important construct in block storage environments (across all protocols, includ-
ing Fibre Channel, FCoE, and iSCSI). Think of a queue as a line at the supermarket checkout.
Queues exist on the server (in this case the ESXi host), generally at both the HBA and LUN lev-
els. They also exist on the storage array. Every array does this differently, but they all have the
same concept. Block-centric storage arrays generally have these queues at the target ports, array-
wide, at the array LUN levels, and finally at the spindles themselves. File-centric storage arrays
generally have queues at the target ports and array-wide, but abstract the array LUN queues
because the LUNs actually exist as files in the file system. However, file-centric designs have
internal LUN queues underneath the file systems themselves and then ultimately at the spindle
level; in other words, it's internal to how the file server accesses its own storage.
The queue depth is a function of how fast things are being loaded into the queue and how
fast the queue is being drained. How fast the queue is being drained is a function of the amount
of time needed for the array to service the I/O requests. This is called the service time; in the
supermarket analogy, it is the speed of the person behind the checkout counter.
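As a rough rule of thumb (Little's Law), the average number of outstanding I/Os in a queue is the arrival rate multiplied by the time each I/O spends being serviced. A quick back-of-the-envelope sketch, with made-up numbers:

```python
# Back-of-the-envelope queue math (Little's Law: L = arrival rate x time in system).
# The numbers below are arbitrary examples, not measurements.
iops = 10_000           # I/Os per second arriving at the LUN
service_time_s = 0.002  # 2 ms average array service time
outstanding = iops * service_time_s
print(outstanding)      # ~20 outstanding I/Os on average
```

The same arrival rate with a slower service time means a deeper queue, which is why a slow array response shows up on the host as queuing.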
Can I View the Queue?
To determine how many items are outstanding in the queue, run resxtop (or esxtop), press u to
switch to the disk device screen, and look at the QUED column.
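If you prefer to capture this over time rather than watch the interactive screen, resxtop and esxtop also offer a batch mode (-b) that writes CSV output. The sketch below assumes that output has been saved to a hypothetical stats.csv; the exact counter names vary by build, so the substring match on "queue" is an assumption, not a documented column name.

```python
# A minimal sketch: scan resxtop/esxtop batch-mode CSV output for queue-related
# counters. Assumes output captured with something like "resxtop -b > stats.csv".
import csv

with open("stats.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    # Find columns whose counter name mentions a queue (name match is a guess).
    cols = [i for i, name in enumerate(header) if "queue" in name.lower()]
    for row in reader:
        for i in cols:
            print(f"{header[i]}: {row[i]}")
```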