the now-running guest don't point to memory in the recently running guest) and switch to the set of shadow page tables associated with the new guest. This operation requires substantial bookkeeping, adding to the CPU overhead of virtualization. It also adds to the memory cost of virtualization: shadow page tables are sizable, and there must be enough of them to efficiently represent all the active processes of each virtual machine.
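The bookkeeping described above can be sketched in miniature. In this deliberately simplified sketch, dictionaries stand in for multi-level hardware page tables (real MMUs walk radix trees, and the names here are illustrative only): a shadow page table composes the guest's virtual-to-guest-physical mapping with the hypervisor's guest-physical-to-host-physical mapping, and must be rebuilt whenever either side changes.

```python
# Hypothetical sketch: shadow page tables as composed dictionaries.
# Real page tables are multi-level hardware structures; this only
# illustrates the composition and the resulting maintenance burden.

def build_shadow(guest_pt, host_pt):
    """Compose guest VA -> guest PA with guest PA -> host PA.

    The hypervisor must redo this work whenever the guest edits its
    page tables -- the CPU-overhead "bookkeeping" described above.
    """
    return {va: host_pt[gpa] for va, gpa in guest_pt.items() if gpa in host_pt}

guest_pt = {0x1000: 0x5000, 0x2000: 0x6000}   # guest VA -> guest PA
host_pt  = {0x5000: 0x9000, 0x6000: 0xA000}   # guest PA -> host PA

shadow = build_shadow(guest_pt, host_pt)
# The MMU then walks only the shadow table: one lookup, no nesting.
assert shadow[0x1000] == 0x9000
```

The cost shows up on every guest page-table update: the hypervisor must intercept the change and rebuild or patch the shadow mapping before the guest runs again.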
Modern systems from AMD and Intel use several methods to reduce the overhead of maintaining shadow page tables, including nested page tables and tagged TLBs with hardware support for changing address mappings. Intel's VT-x provides TLB entries tagged with the virtual-processor identifier (VPID) of the guest's virtual CPU; an entry is used only while that VPID is active, which removes the need to flush the TLB on VM entry and exit. AMD's Nested Page Tables (NPT) feature provides a hardware-managed per-guest translation table, which holds each guest's address-space mappings and is walked by the hardware page-table walker. Although the AMD and Intel implementations differ, both provide a hardware capability that dramatically reduces this virtualization overhead. For better performance, hypervisors such as VirtualBox and VMware leverage these technology enhancements on systems that implement them.
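With nested paging, the hardware performs both translations at run time, so the hypervisor keeps no shadow copies in sync. A rough sketch of the two-dimensional walk, again with dictionaries standing in for hardware tables (all names are illustrative, not a real API):

```python
# Sketch of a nested (two-dimensional) page walk: hardware translates
# guest VA -> guest PA using the guest's own table, then guest PA ->
# host PA using the hypervisor's per-guest nested table.

def nested_walk(va, guest_pt, nested_pt):
    gpa = guest_pt[va]       # first dimension: guest's page table
    return nested_pt[gpa]    # second dimension: nested (host) table

guest_pt  = {0x1000: 0x5000}                   # guest VA -> guest PA
nested_pt = {0x5000: 0x9000, 0x6000: 0xA000}   # guest PA -> host PA

assert nested_walk(0x1000, guest_pt, nested_pt) == 0x9000

# The guest can remap a page with no hypervisor bookkeeping at all;
# the hardware simply walks the updated tables on the next access:
guest_pt[0x1000] = 0x6000
assert nested_walk(0x1000, guest_pt, nested_pt) == 0xA000
```

The trade-off is a longer worst-case walk (each level of the guest walk may itself require a nested walk), which is why TLB tagging matters: cached translations survive VM entry and exit.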
Oracle's virtualization technologies bypass the issue entirely. Solaris Containers do not run a virtual memory environment nested under another virtual memory environment, so there are only virtual and physical memory addresses: it is just as efficient as running Solaris without Containers. The Logical Domains technology addresses the issue by binding guest-real memory to physical memory on a one-to-one basis. Address mappings between guest-real and host-physical addresses can be cached externally to the OS-visible TLB. Because each CPU strand on an Oracle CMT server running Logical Domains runs no more than one domain, and because each strand has its own memory management unit (MMU) and TLB, there is no need to purge and reload TLBs on a context switch: CPU strands are not context switched at all. The hypervisor manages TLBs that translate virtual addresses directly to physical memory addresses, avoiding the double translation. These architectural choices avoid the overhead of maintaining shadow copies of guest virtual address spaces.
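Because the guest-real-to-physical binding is fixed, the second translation collapses into a constant relocation, and the TLB can cache a direct virtual-to-physical mapping. A hypothetical sketch under the simplifying assumption that a domain's memory is one contiguous physical region:

```python
# Sketch: with guest-real memory bound one-to-one to a contiguous
# physical region, the second translation is a fixed offset, so
# virtual -> physical resolves in a single composed lookup.

PHYS_BASE = 0x4000_0000      # hypothetical physical base of this domain

def to_physical(guest_real_addr):
    return PHYS_BASE + guest_real_addr   # no second table walk needed

guest_pt = {0x1000: 0x2000}              # guest VA -> guest-real address
pa = to_physical(guest_pt[0x1000])       # single composed translation
assert pa == 0x4000_2000
```

The composed virtual-to-physical entry is exactly what the hypervisor-managed TLB caches, so each memory access pays for one translation, not two.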
Summary and Lessons Learned
This appendix described the early hypervisors, their evolution, and the problems
and design choices that they addressed. While today's computers have become
far more powerful than in the early days, the experiences from the early virtual
machine systems continue to influence today's systems. Current virtualization
 
 