suffer some overloading on your backend storage processors. If your array is
supporting vSphere hosts and non-vSphere hosts on the same storage processors,
it is possible in some cases for the vSphere hosts to impact the performance of
other systems connected to the same array. For more information and instructions
on how to modify your HBA queue depth, see VMware KB 1267 and
http://longwhiteclouds.com/2013/04/25/important-default-hba-device-queue-depth-changes-between-vsphere-versions/.
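If you do need to change the queue depth, the exact module and parameter names depend on your HBA vendor and driver; KB 1267 lists them. As a minimal sketch only, assuming the QLogic native FC driver (qlnativefc) and its ql2xmaxqdepth parameter, the change could be scripted from the ESXi shell as shown below; a host reboot is required for it to take effect.

```python
# Hypothetical sketch: adjust the LUN queue depth for a QLogic native FC
# driver from the ESXi shell, where both Python and esxcli are available.
# The module name (qlnativefc) and parameter (ql2xmaxqdepth) apply to QLogic
# HBAs only; consult VMware KB 1267 for your vendor's module and parameter.
import subprocess

MODULE = "qlnativefc"        # assumption: QLogic native FC driver
PARAM = "ql2xmaxqdepth"      # assumption: QLogic queue depth parameter
NEW_DEPTH = 64

# Show the current module parameters before changing anything.
print(subprocess.run(
    ["esxcli", "system", "module", "parameters", "list", "-m", MODULE],
    capture_output=True, text=True).stdout)

# Set the new queue depth; the change takes effect after a reboot.
subprocess.run(
    ["esxcli", "system", "module", "parameters", "set",
     "-m", MODULE, "-p", f"{PARAM}={NEW_DEPTH}"],
    check=True)
```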
In Table 6.9, where the maximum number of VMs per host on a datastore is 1, the
maximum number of VMs on a given datastore is effectively the maximum number of
hosts that can be supported in a cluster. To increase the aggregate number of active
IOs available per VM, you need to increase the number of LUNs and ensure that VMs
sharing those LUNs are split across hosts.
Table data sourced from
http://www.vmware.com/files/pdf/scalable_storage_performance.pdf, with additional
scenarios added.
Table 6.9 Calculating Load on a VMFS Volume for Sample Configurations
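As a rough illustration of the arithmetic behind Table 6.9, here is a minimal Python sketch. It assumes the simple model used in the scalable storage performance paper cited above: each host can actively drive roughly (LUN queue depth ÷ average active IOs per VM) VMs on a single datastore. The function and variable names are mine, not the book's.

```python
# Rough sketch of the Table 6.9-style calculation: how many VMs can stay
# actively busy on one VMFS datastore before the per-LUN queue is saturated.

def vms_per_host_per_datastore(lun_queue_depth: int, active_ios_per_vm: int) -> int:
    """Approximate number of VMs one host can actively drive on a single LUN."""
    return lun_queue_depth // active_ios_per_vm

def vms_per_datastore(lun_queue_depth: int, active_ios_per_vm: int, hosts: int) -> int:
    """Approximate active VMs across all hosts sharing the LUN.
    The array-side per-LUN queue still has to absorb up to
    hosts * lun_queue_depth outstanding IOs in the worst case."""
    return hosts * vms_per_host_per_datastore(lun_queue_depth, active_ios_per_vm)

# Example: queue depth 64, 4 active IOs per VM, 32-host cluster.
print(vms_per_host_per_datastore(64, 4))   # 16 active VMs per host
print(vms_per_datastore(64, 4, 32))        # 512 active VMs across the cluster

# The extreme case called out in the text: at 64 active IOs per VM, only
# 1 VM per host fits, so the datastore maximum equals the cluster size.
print(vms_per_host_per_datastore(64, 64))  # 1
```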
You don't just have to worry about your maximum LUN queue depths. You also have to
consider the queue depths of your HBA. Many HBAs have a queue depth of 4,096,
which means you'd only be able to support 64 LUNs per host at a queue depth of 64,
assuming all queues were being used. Fortunately, this is rarely the case, and
overcommitting queues at the host level has less drastic consequences than
overcommitting queues at the storage array level. Any IOs that can't be placed into the
HBA queue will be queued within your vSphere host, and the consequence is increased
IO latency, the amount of which will depend on your IO service times from your
storage. Queuing inside your vSphere host can be detected by monitoring the QUED
column in esxtop.
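A quick back-of-the-envelope check of the 4,096 ÷ 64 figure above, in Python (the numbers are the examples from the text, not vendor specifications):

```python
# HBA queue budget described above.
hba_queue_depth = 4096    # total queue slots on the HBA (example value)
lun_queue_depth = 64      # per-LUN/device queue depth (example value)

# LUNs the HBA can service if every LUN queue is completely full.
luns_at_full_depth = hba_queue_depth // lun_queue_depth
print(luns_at_full_depth)  # 64

# With more LUNs than that all driving full queues, IOs queue inside the
# vSphere host instead, which shows up as added latency (QUED in esxtop).
```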
 
 