different underlying NUMA architecture. This can result in a negative performance
impact on the virtual machine. By default, the only time this setting is updated is when
the vCPU count is modified.
Note
Our recommendation is to leave Cores per Socket at the default setting
unless you have a reason to change it, such as licensing.
Real World
Let's be honest for a minute. Let's say that 21 months (I like odd numbers) after
modifying this setting on a SQL Server virtual machine, you introduce new
hardware into your vSphere cluster with a different underlying NUMA
architecture, and you vMotion the SQL virtual machine to the new hardware. Are
you going to remember to change the setting? When the DBA team calls and says
that despite you moving the SQL virtual machine to newer, bigger, faster
hardware, performance is worse, are you going to remember that the Cores per
Socket setting may be causing this performance dip? If you need to adjust the
parameter, adjust it. Just make sure you have well-defined operational controls in
place to manage this as your environment grows.
If possible, when selecting physical servers for use in your clusters, attempt to adhere to
the same underlying NUMA architecture. We know, this is easier said than done.
Initially, when a cluster is first built, this is realistic; however, as the cluster ages and
servers are added for capacity or replaced at end of life, adhering to the same NUMA
architecture becomes more difficult.
One final note on NUMA. We are often asked, “How do I figure out my server's NUMA
node size?” The best way is to work with your server provider and have them detail the
sockets, cores, and memory that make up a NUMA node. This is important to ask,
because the size of a NUMA node is not always the number of cores on a chip; take, for
example, the AMD Piledriver processor, which has two six-core processors on a single
socket. AMD Bulldozer has two eight-core processors on a single physical socket, also
making it two NUMA nodes.
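Besides asking your server vendor, you can inspect the topology yourself from inside a Linux guest or on a Linux host, where the kernel exposes each NUMA node and its CPUs under `/sys/devices/system/node` (tools such as `numactl --hardware` and `lscpu` read the same data). The following is a minimal sketch, not vendor tooling; it assumes a Linux system with that sysfs path available:

```python
from pathlib import Path

def parse_cpulist(cpulist: str) -> list[int]:
    """Expand a Linux cpulist string such as '0-3,8,10-11' into CPU ids."""
    cpus = []
    for part in cpulist.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        elif part:
            cpus.append(int(part))
    return cpus

def numa_topology(sysfs_root: str = "/sys/devices/system/node") -> dict[int, list[int]]:
    """Map each NUMA node id to the CPU ids it contains, read from sysfs."""
    topology = {}
    for node_dir in sorted(Path(sysfs_root).glob("node[0-9]*")):
        node_id = int(node_dir.name[len("node"):])
        topology[node_id] = parse_cpulist((node_dir / "cpulist").read_text())
    return topology

if __name__ == "__main__":
    if Path("/sys/devices/system/node").exists():
        for node, cpus in numa_topology().items():
            print(f"node {node}: {len(cpus)} CPUs -> {cpus}")
```

If the output shows, say, two nodes on a single-socket server, you are looking at a multi-die package like the AMD examples above, and you should size your virtual machines' vCPU and memory counts against the node size, not the socket size.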
Hyper-Threading Technology
Hyper-Threading Technology (HTT) was invented by Intel and introduced in the Xeon
processors in 2002. At a high level, HTT presents two logical processors on the
same physical core, and these two logical processors share that core's execution
resources. The advantage of this is that if the operating system is able to leverage HTT, the