accomplish this allocation. The reason for this is data locality. Also, remember that hyper-threading is ignored when NUMA node assignments are calculated; only the number of physical cores is counted. However, if the memory of a NUMA node is insufficient, memory from a remote NUMA node will be accessed. People often forget to account for the memory size of a NUMA node when designing their systems. It is important to note that with Wide NUMA managing a Wide VM, the memory is interleaved across the two NUMA nodes. How did VMware address this? They introduced the second feature, vNUMA.
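To make that sizing point concrete, here is a minimal Python sketch (illustrative only, with made-up host and VM figures) that estimates the memory available per NUMA node and flags a virtual machine too large to be satisfied from a single node:

# Illustrative sizing check: a VM whose configured memory exceeds the per-node
# share of host RAM cannot be placed entirely on one NUMA node, so some of its
# memory ends up on (or interleaved with) a remote node. Real designs should
# also subtract hypervisor overhead from the per-node figure.
def fits_in_one_numa_node(host_memory_gb, numa_nodes, vm_memory_gb):
    memory_per_node_gb = host_memory_gb / numa_nodes
    return vm_memory_gb <= memory_per_node_gb

# Example: a 2-socket host with 256 GB of RAM has roughly 128 GB per NUMA node.
print(fits_in_one_numa_node(256, 2, 96))   # True  - fits within one node
print(fits_in_one_numa_node(256, 2, 160))  # False - spans two NUMA nodes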
VMware introduced vNUMA in vSphere 5.0. vNUMA requires virtual hardware version 8 (or higher). vNUMA exposes the underlying physical server's NUMA architecture to the guest virtual machine. Because most major operating systems are NUMA aware, exposing NUMA to the guest operating system allows for better memory locality. vNUMA is automatically enabled for guest virtual machines with nine or more vCPUs. Why nine? vSphere 4.1 supported up to eight vCPUs, and vSphere 5.0 introduced support for 32 vCPUs. Therefore, to avoid legacy issues with upgrades, this feature was enabled by default only for virtual machines with nine or more vCPUs.
Note
vNUMA is disabled when CPU Hot Plug is enabled.
To change the default vNUMA setting of requiring nine or more vCPUs, open the vSphere Web Client and navigate to the virtual machine you want to modify. Click Edit Properties, then click the VM Options tab, expand Advanced, and click Edit Configuration to the right of Configuration Parameters. If the numa.vcpu.maxPerVirtualNode parameter is not present, click Add Row and manually add the parameter. See Figure 5.8 for more information. In Figure 5.8, we inserted the row and configured it for eight cores.
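
For administrators who prefer to script this change, here is a minimal sketch using the open-source pyVmomi library rather than the Web Client; the vCenter address, credentials, and virtual machine name are placeholders, and the new topology generally takes effect at the virtual machine's next power-on.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_numa_max_per_virtual_node(si, vm_name, cores=8):
    # Locate the virtual machine by name with a simple container-view walk.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.DestroyView()

    # Add (or overwrite) the advanced configuration parameter.
    option = vim.option.OptionValue(key="numa.vcpu.maxPerVirtualNode",
                                    value=str(cores))
    spec = vim.vm.ConfigSpec(extraConfig=[option])
    return vm.ReconfigVM_Task(spec=spec)

if __name__ == "__main__":
    context = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                      user="administrator@vsphere.local",  # placeholder account
                      pwd="password", sslContext=context)
    try:
        set_numa_max_per_virtual_node(si, vm_name="SQLVM01", cores=8)  # placeholder VM name
    finally:
        Disconnect(si)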
 