can become a little more complicated when features such as array auto-tiering,
compression, and de-duplication are used. As part of designing your storage
environment, we recommend you specify an SLA for each type of data store that is
backed by a different class of storage (or different storage policy). As part of the SLA,
calculate the IOPS per TB achievable and make this known to the application owners.
Knowing the IOPS per TB achievable and required will also help if you are looking to
host any SQL servers in a cloud environment. Whatever the IOPS per TB of a
particular data store, it is effectively divided among the hosts sharing that data
store, so a single host will most likely not be able to consume the full limit unless
it holds the only VM on the data store.
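The arithmetic described above can be sketched as follows. All figures here are illustrative assumptions, not vendor numbers; substitute the measured capability of your own storage class.

```python
# Hypothetical example: IOPS per TB delivered by a data store, and the
# worst-case per-host share when several hosts use the same data store.

def iops_per_tb(total_iops: float, capacity_tb: float) -> float:
    """IOPS per TB the data store's backing storage can deliver."""
    return total_iops / capacity_tb

def per_host_share(total_iops: float, hosts_sharing: int) -> float:
    """Even split of the data store's IOPS across the hosts sharing it."""
    return total_iops / hosts_sharing

datastore_iops = 8000   # assumed aggregate IOPS for this storage class
datastore_tb = 4        # assumed data store capacity in TB

print(iops_per_tb(datastore_iops, datastore_tb))   # 2000.0 IOPS per TB
print(per_host_share(datastore_iops, 4))           # 2000.0 IOPS per host with 4 hosts sharing
```

Publishing a figure like this in the SLA for each storage class lets application owners size their workloads against a known ceiling.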
In many cases, you can reduce the number of data stores you need to manage by
increasing the queue depth per HBA LUN on each vSphere host. This allows you to
place additional virtual disks on the data store, but without sacrificing the aggregate
number of available storage IO queues. We recommend that you do not increase the
aggregate queue depth presented to the storage processors: if you reduce the
number of LUNs and increase the queue depth per LUN, the total queue depth to the
storage processor ports should remain the same.
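The sizing rule above reduces to keeping the product of LUN count and per-LUN queue depth constant. A minimal sketch, with illustrative numbers:

```python
# Sketch of the consolidation rule: fewer LUNs at a higher per-LUN queue
# depth should present the same aggregate queue depth to the storage
# processor ports. The LUN counts and depths below are only examples.

def aggregate_queue_depth(num_luns: int, depth_per_lun: int) -> int:
    """Total queue depth presented to the storage processor ports."""
    return num_luns * depth_per_lun

before = aggregate_queue_depth(num_luns=16, depth_per_lun=32)  # 512
after = aggregate_queue_depth(num_luns=8, depth_per_lun=64)    # 512

assert before == after  # load on the storage processors is unchanged
```

In other words, halving the number of LUNs while doubling the per-LUN queue depth leaves the storage processors no busier than before, while giving you fewer data stores to manage.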
Caution
Be aware that if your storage is under-configured or already overloaded,
increasing the queue depths won't help you. You need to know the queue
depth limits of your storage array and processor ports and make sure that you
don't exceed them. If you overload a traditional storage processor and get a
QFULL SCSI sense code, the storage controller (HBA) will drop the queue depth
to 1 and slowly increase it over time. Your performance during this period will
suffer significantly (like falling off a cliff). We recommend that you consult with
your storage team, storage vendor, and storage documentation to find out the
relevant limits for your storage system before changing any queue depths. This
will help avoid any possible negative performance consequences that would
result from overloading your storage. Some storage arrays have a global queue
per storage port, and some have a queue per LUN. Whether your storage is Fibre
Channel, FCoE, or iSCSI, you need to understand the limits before you make any
changes.
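Before changing queue depths, a simple pre-flight check like the following can guard against the QFULL scenario described above. The port limit used here is purely an assumed figure; get the real limit from your storage vendor's documentation.

```python
# Hypothetical pre-change check: confirm that the aggregate queue depth
# you plan to present stays within the storage port's queue limit.
# 2048 is an assumed per-port limit, not a real vendor specification.

PORT_QUEUE_LIMIT = 2048  # confirm the actual limit with your array vendor

def within_port_limit(num_luns: int, depth_per_lun: int,
                      limit: int = PORT_QUEUE_LIMIT) -> bool:
    """True if the planned aggregate queue depth fits the port limit."""
    return num_luns * depth_per_lun <= limit

print(within_port_limit(32, 64))   # True:  2048 <= 2048
print(within_port_limit(64, 64))   # False: 4096 > 2048
```

Whether the limit is enforced globally per storage port or per LUN varies by array, so run the numbers both ways if your vendor documents both limits.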
Tip
The default queue depth on a QLogic HBA changed from 32 in vSphere 4.x to 64 in
5.x. The Emulex default queue depth is still 32 (two reserved, leaving 30 for
IO), and Brocade's is 32. If you didn't know this and simply upgraded, you could