to hosts that can belong to different Data Centers. Finally, the CustomerDCBroker class models the customer's behavior and QoS requirements, negotiates with the cloud coordinator, and submits computation requests.
5.1 Evaluation in Two Scenarios
In order to evaluate the effectiveness of our Federated Application Provisioning
strategy (FAP), we used a simulation setup that is similar to the one used in
[3]. The simulation environment included 2 Data Centers (DC1 and DC2), with
100 hosts each. These hosts had one CPU core with 1000 MIPS, 2GB of RAM
and 1TB of storage. The workload model included provisioning for 400 VMs,
where each VM requested one CPU core, 256 MB of RAM and 1GB of storage.
The CPU utilization distribution was set to a Poisson distribution, where each task required 150 MIPS, taking 10 minutes to complete execution. We assumed CPU utilization thresholds of 20, 40, 60, 80 and 100%, and a global energy consumption threshold of 3 kWh per data center. Initially, the provisioner allocates as many virtual machines as possible on a single host, without violating any of the host's constraints. The SLA was defined in terms of response time (10 minutes).
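This initial packing step can be illustrated with a short first-fit sketch. This is not the simulator's actual code: the `Host` capacities come from the setup above (one 1000-MIPS core, 2 GB RAM, 1 TB storage), and we assume each VM demands the 150 MIPS of its task in addition to its 256 MB RAM and 1 GB storage request.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    # Capacities taken from the simulation setup described above.
    mips: int = 1000
    ram_mb: int = 2048
    storage_gb: int = 1024
    vms: list = field(default_factory=list)

    def can_fit(self, vm_mips, vm_ram_mb, vm_storage_gb):
        # A VM fits only if no capacity constraint of the host is violated.
        used_mips = sum(v[0] for v in self.vms)
        used_ram = sum(v[1] for v in self.vms)
        used_sto = sum(v[2] for v in self.vms)
        return (used_mips + vm_mips <= self.mips
                and used_ram + vm_ram_mb <= self.ram_mb
                and used_sto + vm_storage_gb <= self.storage_gb)

def provision(hosts, vm_requests):
    """Pack each VM onto the first host with spare capacity (first fit)."""
    placement = []
    for vm in vm_requests:
        for i, host in enumerate(hosts):
            if host.can_fit(*vm):
                host.vms.append(vm)
                placement.append(i)
                break
        else:
            placement.append(None)  # no host in this DC can take the VM
    return placement
```

Under these assumed demands, the CPU constraint binds first: each host takes six 150-MIPS VMs (900 MIPS) before a seventh would exceed 1000 MIPS, so the 400 VMs of the workload fill 67 of the 100 hosts.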
In the first evaluation scenario, there are two Data Centers (DC1 and DC2)
and tasks are always submitted to DC1. If DC1 becomes overloaded, VMs are
migrated from DC1 to DC2. The simulation was repeated 10 times, and the mean values for energy consumption without our mechanism, using only DC1 (trivial), and with our Federated Application Provisioning (FAP) strategy, are presented in Figures 2(a), (b) and (c).
Figure 2(a) shows that the proposed provisioning technique reduces the total power consumption of the Data Centers without SLA violations. In this case, an average reduction of 53.37% in power consumption was achieved: DC1 consumed more than 9 kWh with the trivial approach, while no more than 4.9 kWh was consumed by both Data Centers together with our approach (2.92 kWh for DC1 and 1.98 kWh for DC2). To achieve this, DC1 first tried to maximize the usage of its own resources, consuming energy up to its limit without violating the SLAs. DC2 was used only when DC1 was overloaded, was on the verge of an SLA violation, or was close to its energy consumption limit.
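The three conditions under which DC2 is involved can be summarized as a single predicate. The sketch below is only an illustration: the energy limit (3 kWh) and SLA response time (10 minutes) come from the setup above, but the margin fractions (95% of the energy cap, 90% of the SLA time) are assumed values, not the authors' exact thresholds.

```python
ENERGY_LIMIT_KWH = 3.0   # per-data-center energy threshold from the setup
SLA_RESPONSE_MIN = 10.0  # SLA response time in minutes

def should_migrate_to_dc2(dc1_cpu_util, dc1_energy_kwh,
                          predicted_response_min,
                          overload_util=1.0,    # assumed: fully utilized
                          energy_margin=0.95,   # assumed safety margin
                          sla_margin=0.9):      # assumed safety margin
    """Migrate VMs to DC2 if DC1 is overloaded, near an SLA
    violation, or near its energy consumption limit."""
    overloaded = dc1_cpu_util >= overload_util
    near_energy_cap = dc1_energy_kwh >= energy_margin * ENERGY_LIMIT_KWH
    near_sla_violation = predicted_response_min >= sla_margin * SLA_RESPONSE_MIN
    return overloaded or near_energy_cap or near_sla_violation
```

Any one condition is sufficient to trigger migration, which matches the DC1-first policy: DC2 stays idle until DC1 can no longer absorb load safely.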
Figure 2(b) presents the number of VM migrations when our mechanism is
used. It can be seen that the number of migrations decreases as the threshold of
CPU usage increases. This result was expected: with a higher threshold, more CPU capacity is available, and the allocation policy tends to use it, allocating more VMs on the same physical machine. In Figure 2(c), we measured the wallclock time needed to execute the 400 tasks, with our mechanism (FAP) and without it (trivial). It can be seen that FAP increases the overall execution time. This occurs because of the overhead caused by VM migrations between Data Centers and the negotiations between the CLU and the CSP agents. Nevertheless, this increase is less than 22%, since the wallclock execution time without and with the mechanism is 21.5 min and 27.4 min, respectively, for 100% CPU utilization. We consider