Response time is the time until the VM is restored or migrated on the destination virtualization platform. OOS is the time period in which the DVM is out of service. Comm gives the amount of DVM data transmitted over the network. Perf degradation measures how much the source machine's performance is degraded during the migration. Table 16.1 shows that our approach is much better than the other three approaches in OOS time and in the amount of data transferred.
Although research on suspend-and-resume migration [20,21,24] has proposed several mechanisms to optimize migration speed, the suspend-and-resume mechanism inherently requires moving the VM image over the network, which incurs a much higher communication cost. The shared-storage live migration approach performs best in response time because it does not move the VM image. However, because live memory collection and restoration are performed on both the source and destination platforms, experiments show that the performance of the source VM is degraded by 20% to 29% during migration. With the DVM method, since VM reconstruction is performed only on the destination platform, the source VM is not affected. The DVM approach thus provides a significant improvement in overall performance over the existing methods. Note also that the live migration approach is limited to cluster computers and cannot be extended to general network environments such as grid environments.
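To make the comparison concrete, the following is a minimal C sketch of how the OOS and Comm metrics could be measured around a migration. The functions suspend_vm(), transfer_state(), and resume_vm() are hypothetical stand-ins for the real migration steps, stubbed out here so the sketch compiles; they are not part of the DVM system's actual API. Perf degradation would be measured separately, by benchmarking the source host while a migration is in progress.

#include <stdio.h>
#include <sys/time.h>

/* Hypothetical stand-ins for the real suspend/transfer/resume steps. */
static void suspend_vm(void)    {}
static long transfer_state(void) { return 0; }
static void resume_vm(void)     {}

static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    double t0 = now_sec();

    suspend_vm();                  /* DVM leaves service here        */
    long comm = transfer_state();  /* Comm: bytes sent over network  */
    resume_vm();                   /* DVM is back in service here    */

    double oos = now_sec() - t0;   /* OOS: out-of-service window     */
    printf("OOS = %.3f s, Comm = %ld bytes\n", oos, comm);
    return 0;
}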
The above experiments demonstrate the capability of a DVM to support migration. Not only does it support various migration techniques, but it also improves their performance through its novel feature of reconstructing the VM image from skeleton data, particularly in wide-area image migration scenarios; the experimental results confirm this.
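As an illustration of the skeleton idea, here is a minimal C sketch under our own assumption that the skeleton records one hash per image block, so the destination can reuse any block already present in a local template image and fetch only the missing blocks over the wide area. The Block structure, find_local(), and fetch_remote() are hypothetical, and fetch_remote() is stubbed out; this is a sketch of the principle, not the paper's actual reconstruction protocol.

#include <stdio.h>
#include <string.h>

#define NBLOCKS 8

typedef struct { unsigned hash; char data[64]; } Block;

/* Look a skeleton hash up in a template image already on the destination. */
static Block *find_local(Block *tmpl, int n, unsigned hash)
{
    for (int i = 0; i < n; i++)
        if (tmpl[i].hash == hash)
            return &tmpl[i];
    return NULL;
}

/* Hypothetical wide-area fetch from the source; stubbed out here. */
static void fetch_remote(unsigned hash, Block *out)
{
    out->hash = hash;
    memset(out->data, 0, sizeof out->data);
}

/* Rebuild the image: reuse local blocks, fetch only what is missing. */
static void reconstruct(const unsigned skeleton[NBLOCKS],
                        Block *tmpl, int ntmpl, Block image[NBLOCKS])
{
    int fetched = 0;
    for (int i = 0; i < NBLOCKS; i++) {
        Block *b = find_local(tmpl, ntmpl, skeleton[i]);
        if (b) {
            image[i] = *b;                 /* free: already local   */
        } else {
            fetch_remote(skeleton[i], &image[i]);
            fetched++;                     /* costs wide-area traffic */
        }
    }
    printf("rebuilt %d blocks, fetched %d over the wide area\n",
           NBLOCKS, fetched);
}

int main(void)
{
    Block tmpl[4] = { {1, {0}}, {2, {0}}, {3, {0}}, {4, {0}} };
    unsigned skeleton[NBLOCKS] = {1, 2, 3, 4, 9, 2, 7, 1};
    Block image[NBLOCKS];
    reconstruct(skeleton, tmpl, 4, image); /* fetches only blocks 9 and 7 */
    return 0;
}

Because only the skeleton and the missing blocks cross the wide-area link, the transfer volume shrinks with the overlap between the migrating image and the destination's template.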
16.3.5 DVM Mobility and Communication State Transfer
The HPCM process migration system has also been built successfully; it can migrate a legacy code from one computer to another. Several critical mechanisms and components have been developed, including the execution, memory, and communication state transfer mechanisms [25-27], a precompiler [24], and an automatic monitoring and triggering runtime system [28] to support automatic migration. These mechanisms and components have been tested under MPI-2 [29] and PVM environments with different applications. As shown in Figure 16.10, for the Linpack benchmark, the overhead of homogeneous migration usually ranges from 0.08% to 0.5%, which is very low.
The communication state transfer protocols are tested with the IS benchmark from the NAS Parallel Benchmarks 3.1 [30] and with mpptest [31]. We
have developed a portable communication library, called MPI-Mitten, to
support migration with group communication under MPI [27]. Figure
16.11 shows the MPI-Mitten overhead during normal execution when no
migration is conducted.
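A plausible reason the overhead stays low during normal execution is that such a library can sit behind the standard MPI profiling (PMPI) interface and do only lightweight bookkeeping per call. The following C sketch shows that general pattern; the counters are our illustration of logging communication state, not MPI-Mitten's actual internal data structures.

#include <mpi.h>

/* Per-process communication log; an illustration, not MPI-Mitten's
 * actual internal state. */
static long sent_msgs  = 0;
static long sent_bytes = 0;

/* Wrapper seen by the application.  PMPI_Send is the real MPI entry
 * point, exposed by every MPI implementation for profiling layers. */
int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    int size;
    MPI_Type_size(type, &size);
    sent_msgs  += 1;
    sent_bytes += (long)count * size;   /* record state, then forward */
    return PMPI_Send(buf, count, type, dest, tag, comm);
}

Linked into an MPI application ahead of the MPI library, such a wrapper adds only a counter update per send before forwarding to PMPI_Send, which is consistent with the small normal-execution overheads reported in Figure 16.11.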