The CPU Limit value applies to all VMs in the resource pool. All VMs combined are allowed to consume up to this value.
In the example, the ProductionVMs resource pool does not have a CPU limit assigned. In this
case, the VMs in the ProductionVMs resource pool are allowed to consume as many CPU cycles
as the ESXi hosts in the cluster can provide. The DevelopmentVMs resource pool, on the other
hand, has a CPU Limit setting of 11,700 MHz, meaning that all the VMs in the DevelopmentVMs
resource pool are allowed to consume a maximum of 11,700 MHz of CPU capacity. With 2.93
GHz Intel Xeon CPUs, this is the approximate equivalent of one quad-core CPU.
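To make that equivalence explicit, here is a quick back-of-the-envelope check in Python (the numbers come straight from the example; the slight rounding is why the text says "approximate"):

```python
# Back-of-the-envelope check: how many ~2.93 GHz cores does an
# 11,700 MHz pool limit represent? (Numbers from the example above.)
core_mhz = 2_930          # one 2.93 GHz Intel Xeon core
pool_limit_mhz = 11_700   # CPU Limit on the DevelopmentVMs pool

print(pool_limit_mhz / core_mhz)  # ~3.99 -> roughly one quad-core CPU
```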
For the most part, CPU shares, reservations, and limits behave similarly on resource pools
and on individual VMs. The same is also true for memory shares, reservations, and limits, as
you'll see in the next section.
Managing Memory Usage with Resource Pools
In the memory portion of the resource pool settings, the first setting is the Shares value. This setting works in much the same way as memory shares on individual VMs: it determines which group of VMs will be the first to give up memory via the balloon driver (or, if memory pressure is severe enough, through memory compression or hypervisor swapping to disk) in the face of contention. The difference is that this setting establishes a priority value for all VMs in the resource pool when they compete for resources with VMs in other pools. Looking at the memory
share settings in our example (ProductionVMs = Normal and DevelopmentVMs = Low), this
means that if host memory is limited, VMs in the DevelopmentVMs resource pool that need
more memory than their reservation would have a lower priority than an equivalent VM in
the ProductionVMs resource pool. Figure 11.14, which we used previously to help explain CPU
shares on resource pools, applies here as well. As with CPU shares, you can also use the Resource
Allocation tab to explore how memory resources are assigned to resource pools or VMs within
resource pools.
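To illustrate how share-based priority plays out under contention, here is a minimal sketch (not vSphere code; the function name and megabyte figures are illustrative) that divides contended memory between sibling pools in proportion to their shares. The 2:1 ratio mirrors the Normal-versus-Low relationship:

```python
# A minimal sketch of proportional-share arithmetic. Under contention,
# sibling pools divide the disputed memory in proportion to their shares.
# The 2:1 ratio below mirrors Normal vs. Low; absolute values don't matter,
# only the ratio between siblings does.
def divide_contended_memory(pool_shares: dict[str, int],
                            contended_mb: int) -> dict[str, float]:
    total = sum(pool_shares.values())
    return {name: contended_mb * s / total for name, s in pool_shares.items()}

print(divide_contended_memory({"ProductionVMs": 2, "DevelopmentVMs": 1}, 12_288))
# {'ProductionVMs': 8192.0, 'DevelopmentVMs': 4096.0}
```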
The second setting is the resource pool's memory reservation. The memory Reservation value
will reserve this amount of host RAM for VMs in this resource pool, which effectively ensures
that some actual RAM is guaranteed to the VMs. As explained in the discussion on CPU reser-
vations, the Expandable check box next to Reservation Type does not limit how much memory the resource pool can use but rather how much memory can be reserved within it.
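The following toy admission check models that distinction; it is a sketch rather than vSphere's actual admission-control algorithm, and all function and parameter names are illustrative. The Expandable setting governs whether a new VM-level reservation can borrow unreserved capacity from the parent pool, not how much memory running VMs may consume:

```python
# A toy admission check modeling what Expandable changes: it governs how
# much can be *reserved* in the pool, not how much memory VMs may use.
def can_power_on(vm_reservation_mb: int, pool_reservation_mb: int,
                 already_reserved_mb: int, expandable: bool,
                 parent_unreserved_mb: int) -> bool:
    unreserved = pool_reservation_mb - already_reserved_mb
    if vm_reservation_mb <= unreserved:
        return True                  # fits in the pool's own reservation
    if expandable:                   # may borrow from the parent pool
        return vm_reservation_mb <= unreserved + parent_unreserved_mb
    return False                     # fixed reservation: admission fails

print(can_power_on(4096, 8192, 6144, False, 16384))  # False: only 2,048 MB left
print(can_power_on(4096, 8192, 6144, True, 16384))   # True: borrows from parent
```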
With the memory Limit value, you set a limit on how much host RAM a particular group of
VMs can consume. If administrators have been given the Create Virtual Machines permission,
then the memory Limit value would prevent those administrators from running VMs that con-
sume more than that amount of actual host RAM. In our example, the memory Limit value on
the DevelopmentVMs resource pool is set to 24,576 MB. How many VMs can administrators in
development create? They can create as many as they want.
Although this setting does nothing to limit creating VMs, it places a limit on running VMs.
So, how many can they run? The cap placed on memory use is not a per-VM setting but a
cumulative setting. Administrators might be able to run only one VM with all the memory or
multiple VMs with lower memory configurations. Assuming that each VM is created without an
individual memory Reservation value, the administrator can run as many VMs concurrently as
they want. However, once the VMs consume 24,576 MB of host RAM, the hypervisor will step
in and prevent the VMs in the resource pool from using any additional memory. Refer back to the discussion of memory limits in the section titled “Using Memory Limits” for the techniques that the VMkernel uses to enforce the memory limit. If the administrator builds six VMs with 4,096 MB as the initial memory amount, then all six VMs together will consume 24,576 MB (assuming each VM actually uses its full allocation), exactly reaching the pool's memory limit.
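A trivial calculation shows why six VMs at that size mark the point where the limit bites:

```python
# How many fully memory-backed 4,096 MB VMs fit under the pool's
# 24,576 MB memory limit? (More VMs can still run; they just contend
# for the same capped amount of host RAM.)
pool_limit_mb = 24_576
vm_memory_mb = 4_096
print(pool_limit_mb // vm_memory_mb)  # 6
```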