■ The increasing importance of isolation and security in modern systems
■ The failures in security and reliability of standard operating systems
■ The sharing of a single computer among many unrelated users, such as in a datacenter or
cloud
■ The dramatic increases in the raw speed of processors, which make the overhead of VMs
more acceptable
The broadest definition of VMs includes basically all emulation methods that provide a
standard software interface, such as the Java VM. We are interested in VMs that provide a
complete system-level environment at the binary instruction set architecture (ISA) level. Most
often, the VM supports the same ISA as the underlying hardware; however, it is also possible
to support a different ISA, and such approaches are often employed when migrating between
ISAs, so as to allow software from the departing ISA to be used until it can be ported to the
new ISA. Our focus here will be on VMs where the ISA presented by the VM and the underlying hardware match. Such VMs are called (Operating) System Virtual Machines. IBM VM/370,
VMware ESX Server, and Xen are examples. They present the illusion that the users of a VM
have an entire computer to themselves, including a copy of the operating system. A single
computer runs multiple VMs and can support a number of different operating systems (OSes).
On a conventional platform, a single OS “owns” all the hardware resources, but with a VM
multiple OSes all share the hardware resources.
The software that supports VMs is called a virtual machine monitor (VMM) or hypervisor; the VMM is the heart of virtual machine technology. The underlying hardware platform is called the host, and its resources are shared among the guest VMs. The VMM determines how to map
virtual resources to physical resources: A physical resource may be time-shared, partitioned,
or even emulated in software. The VMM is much smaller than a traditional OS; the isolation
portion of a VMM is perhaps only 10,000 lines of code.
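The three mapping strategies can be illustrated with a small sketch. The class and method names below are hypothetical, invented for this example rather than drawn from any real hypervisor, but the structure shows how a single physical CPU can be time-shared, memory partitioned into disjoint slices, and a per-guest device synthesized entirely in software:

```python
# Minimal sketch of the three ways a VMM can map virtual resources to
# physical ones: time-sharing, partitioning, and software emulation.
# All names here are illustrative, not from any real hypervisor.

class TinyVMM:
    def __init__(self, physical_mem_mb, guests):
        self.guests = guests
        # Partitioning: each guest receives a fixed, disjoint memory slice.
        slice_mb = physical_mem_mb // len(guests)
        self.mem_base = {g: i * slice_mb for i, g in enumerate(guests)}
        self.mem_size = {g: slice_mb for g in guests}
        self.next = 0  # round-robin index for time-sharing the CPU

    def schedule(self):
        # Time-sharing: the single physical CPU is handed to guests in turn.
        guest = self.guests[self.next % len(self.guests)]
        self.next += 1
        return guest

    def read_virtual_clock(self, guest):
        # Emulation: a per-guest device that does not exist in hardware
        # (here, a clock) is synthesized entirely in VMM software.
        return f"tick for {guest}"

vmm = TinyVMM(physical_mem_mb=4096, guests=["vm0", "vm1"])
print(vmm.schedule())                  # vm0 gets the CPU first
print(vmm.mem_base["vm1"], "MB base")  # vm1's partition starts at 2048 MB
print(vmm.read_virtual_clock("vm0"))
```

Real VMMs combine these strategies per resource: CPUs are typically time-shared, physical memory is partitioned (with remapping), and many devices are emulated or paravirtualized.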
In general, the cost of processor virtualization depends on the workload. User-level
processor-bound programs, such as SPEC CPU2006, have essentially zero virtualization overhead because the OS is rarely invoked, so everything runs at native speed. Conversely, I/O-intensive workloads generally are also OS-intensive, executing many system calls (required to perform I/O) and privileged instructions that can result in high virtualization overhead. The overhead is determined by the number of instructions that must be emulated by the VMM and
how slowly they are emulated. Hence, when the guest VMs run the same ISA as the host, as
we assume here, the goal of the architecture and the VMM is to run almost all instructions
directly on the native hardware. On the other hand, if the I/O-intensive workload is also I/O-bound, the cost of processor virtualization can be completely hidden by low processor utilization, since the processor is often idle waiting for I/O.
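This cost model follows the classic trap-and-emulate structure: unprivileged instructions run directly on the hardware, while privileged ones trap into the VMM and are emulated in software. The sketch below counts both kinds; the instruction names are made up for illustration and do not correspond to any real ISA:

```python
# Sketch of trap-and-emulate cost accounting: unprivileged guest
# instructions execute natively (no VMM involvement), while privileged
# ones trap to the VMM and are emulated, which is where virtualization
# overhead comes from. Instruction names are invented for illustration.

PRIVILEGED = {"read_timer", "set_page_table", "io_port_write"}

def run_guest(instructions):
    native = emulated = 0
    for insn in instructions:
        if insn in PRIVILEGED:
            emulated += 1   # trap into the VMM, emulate in software (slow)
        else:
            native += 1     # execute directly on the hardware (full speed)
    return native, emulated

# A CPU-bound workload traps almost never; an I/O-heavy one traps often.
cpu_bound = ["add"] * 1000 + ["read_timer"]
io_heavy = (["add"] * 10 + ["io_port_write"]) * 100

print(run_guest(cpu_bound))  # (1000, 1): ~0.1% of instructions emulated
print(run_guest(io_heavy))   # (1000, 100): ~9% of instructions emulated
```

The ratio of emulated to native instructions, times the per-trap emulation cost, is a first-order estimate of virtualization overhead, which is why the two workload classes behave so differently.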
Although our interest here is in VMs for improving protection, VMs provide two other benefits that are commercially significant:
1. Managing software —VMs provide an abstraction that can run the complete software stack,
even including old operating systems such as DOS. A typical deployment might be some
VMs running legacy OSes, many running the current stable OS release, and a few testing
the next OS release.
2. Managing hardware —One reason for multiple servers is to have each application running
with its own compatible version of the operating system on separate computers, as this
separation can improve dependability. VMs allow these separate software stacks to run
independently yet share hardware, thereby consolidating the number of servers. Another
example is that some VMMs support migration of a running VM to a different computer,
either to balance load or to evacuate from failing hardware.