performance, your SQL VMs will also have very large compute footprints in terms of
memory and CPU. If that is what it takes to meet your performance targets, however,
you may find that you need to design for a smaller number of hosts per cluster, and
potentially have more clusters. This layout assumes that each VMDK will use the full
queue depth of each data store, which is often not the case. You may find that you need
to reduce the queue depth per LUN to avoid overloading your backend storage ports,
which defeats the purpose of having more LUNs in the first place.
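To see why driving many LUNs at full queue depth can overwhelm the back-end storage ports, it helps to run the fan-in arithmetic. The sketch below is illustrative only; every number in it is an assumption, not a recommendation for any particular array:

```python
# Illustrative fan-in arithmetic: all values below are assumed examples,
# not sizing recommendations for any specific storage array.

hosts = 8                 # ESXi hosts in the cluster (assumed)
luns_per_host = 16        # data stores (LUNs) presented to each host (assumed)
queue_depth_per_lun = 64  # per-LUN queue depth on each host (assumed)
array_port_queue = 1600   # queue capacity of one array front-end port (assumed)
array_ports = 4           # front-end ports serving this cluster (assumed)

# Worst case: every host drives every LUN at its full queue depth.
worst_case_outstanding = hosts * luns_per_host * queue_depth_per_lun
array_capacity = array_ports * array_port_queue

print(f"Worst-case outstanding I/Os: {worst_case_outstanding}")
print(f"Array front-end capacity:    {array_capacity}")

if worst_case_outstanding > array_capacity:
    # Reducing the per-LUN queue depth so the total fan-in fits within
    # the array's port queues avoids overloading the back-end ports.
    safe_depth = array_capacity // (hosts * luns_per_host)
    print(f"Reduce per-LUN queue depth to about {safe_depth}")
```

With these assumed figures the worst case (8,192 outstanding I/Os) exceeds the array's capacity (6,400), which is exactly the situation where lowering the per-LUN queue depth undercuts the benefit of adding more LUNs.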
Often the need for extreme performance is driven by many database instances or
schemas running on a single VM, and in these cases it may be a better design choice to
split up those instances into multiple VMs. Because VMDKs (not RDMs) are used, it is
possible to start with the example in Figure 6.19 and increase the number of data stores
if required at a later time. You can migrate the VMDKs without any downtime by using
VMware Storage vMotion.
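When deciding whether a VM has outgrown its current data stores, a rough outstanding-I/O estimate based on Little's Law (outstanding I/Os ≈ IOPS × latency) can suggest how many data stores to grow to before migrating VMDKs. This is a back-of-envelope sketch; all of the inputs are assumptions for illustration:

```python
import math

# Rough sizing sketch using Little's Law; every input is an assumption.
target_iops = 40_000          # required IOPS for the SQL Server VM (assumed)
target_latency_s = 0.005      # 5 ms target I/O latency (assumed)
effective_queue_per_ds = 32   # usable queue depth per data store (assumed)

# Little's Law: concurrent outstanding I/Os = throughput * latency
outstanding = target_iops * target_latency_s

datastores_needed = math.ceil(outstanding / effective_queue_per_ds)
print(f"Outstanding I/Os needed: {outstanding:.0f}")
print(f"Data stores needed:      {datastores_needed}")
```

Under these assumed figures the VM needs roughly 200 concurrent outstanding I/Os, or about seven data stores at a usable queue depth of 32 each; the same arithmetic with your own measured baseline tells you whether adding data stores (and migrating VMDKs onto them) is warranted.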
Figure 6.19 Virtual machines sharing data stores.
Up until now we have provided examples where the storage is dedicated to each SQL
Server. This is a very traditional approach to SQL storage architecture. When you have
a very good baseline and understanding of your inventory and workload characteristics,
it is a good approach, but it has a couple of potential drawbacks. The first drawback is
manageability. Each VM requires its own set of data stores, which means more data
stores to manage overall, and performance and capacity may not be balanced
efficiently across many SQL Server VMs. You may also end up with many different data
store sizes for the different databases, which leaves little opportunity for
standardization. This may be more of a problem in a smaller environment because there
 