Figure 6.27 Multiple VMs on different ESXi hosts per data store.
Note
Prior to the introduction of VMware APIs for Array Integration (VAAI) and
VMFS5, VMware recommended hosting no more than 25 VMDKs on a single
data store. This guidance no longer applies if you have a VAAI-capable array
and a freshly created (rather than upgraded-from-VMFS3) VMFS5 data store.
You are unlikely to want to go this high for your production SQL Servers,
but it might be applicable for Dev and Test.
Tip
To ensure that two VMs sharing the same data store do not reside on the same
vSphere host, you can use vSphere DRS Rules to keep the VMs separated. This
reduces the chance of queue contention between the two SQL Servers that might
occur if they were on the same host. Having too many DRS Rules can impact the
effectiveness of vSphere DRS and increase management complexity, so their use
should be kept to a minimum. If your performance calculations turn out to be
slightly off and you discover one of the VMDKs is busier than expected, you can
easily migrate it to another data store using Storage vMotion. This can be done
online and is nondisruptive to SQL, although some additional IO latency may be
seen during the migration process.
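The separation the Tip describes can be illustrated with a small sketch of the constraint a DRS VM-VM anti-affinity rule enforces. The VM and host names and the placement map below are hypothetical; in practice the rule is created in vCenter (for example, with a PowerCLI cmdlet) and DRS enforces it automatically.

```python
# Minimal sketch of the invariant behind a DRS anti-affinity rule: the two
# SQL Server VMs sharing a data store must never run on the same host.
# All names here are hypothetical, for illustration only.

def violates_anti_affinity(placement, rule_vms):
    """Return True if any two VMs covered by the rule share a host."""
    hosts = [placement[vm] for vm in rule_vms if vm in placement]
    return len(hosts) != len(set(hosts))

placement = {"sql-vm-01": "esxi-01", "sql-vm-02": "esxi-02"}
print(violates_anti_affinity(placement, ["sql-vm-01", "sql-vm-02"]))  # False

placement["sql-vm-02"] = "esxi-01"  # co-located: DRS would migrate one away
print(violates_anti_affinity(placement, ["sql-vm-01", "sql-vm-02"]))  # True
```

When DRS detects the violated state, it resolves it by vMotioning one of the VMs to another host, which is why the rule also protects you during host failures and maintenance-mode evacuations.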
Storage IO Control—Eliminating the Noisy Neighbor
One of the potential impacts of working in a shared storage environment is having one
VM monopolize storage performance resources to the detriment of other VMs. We call
this the Noisy Neighbor Effect. If one VM suddenly starts issuing a lot more IO than all
the other VMs, it could potentially slow down other VMs on the same data store, or on
the same array. To combat this problem, VMware introduced Storage IO Control
(SIOC) in vSphere 4.1 and has made enhancements to it in vSphere 5.x.
Where more than one VM shares a data store and SIOC is enabled, vSphere takes
action whenever latency exceeds a congestion threshold (30ms by default). It
reduces latency by dynamically modifying the device queue depth of each host
sharing the data store, in effect trading off throughput for latency. As a
result, individual VMs may see higher latency from storage, but each gets its
fair share of the storage performance resources.
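The throttling behavior can be sketched as a toy feedback loop. This is illustrative only: the real SIOC algorithm in ESXi is more sophisticated (it weighs per-VM shares and smooths latency samples), and the queue-depth floor, ceiling, and AIMD-style step sizes below are assumptions, not VMware's actual values.

```python
# Toy sketch of SIOC-style latency control: each host sharing a data store
# observes device latency and, when it exceeds the congestion threshold
# (30 ms by default), shrinks its device queue depth -- trading aggregate
# throughput for lower, fairer latency. Floor/ceiling/steps are assumed.

THRESHOLD_MS = 30   # default SIOC congestion threshold
MIN_QDEPTH = 4      # assumed throttle floor
MAX_QDEPTH = 64     # assumed configured device queue depth

def adjust_queue_depth(current_depth, observed_latency_ms):
    """Return a new device queue depth for one host (AIMD-style sketch)."""
    if observed_latency_ms > THRESHOLD_MS:
        # Multiplicative decrease while the data store is congested.
        return max(MIN_QDEPTH, current_depth // 2)
    # Additive increase back toward the maximum once latency recovers.
    return min(MAX_QDEPTH, current_depth + 4)

# Two hosts share a data store; a noisy neighbor pushes latency to 45 ms.
depths = {"esxi-01": 64, "esxi-02": 64}
for host in depths:
    depths[host] = adjust_queue_depth(depths[host], observed_latency_ms=45)
print(depths)  # both hosts throttled: {'esxi-01': 32, 'esxi-02': 32}
```

Because every host sharing the data store throttles in the same way, no single host (and so no single noisy VM) can monopolize the array's queue, which is how SIOC delivers each VM its fair share.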
SIOC should be activated only to deal with unexpected peaks of IO activity and should
 