3 Related Work
Several works in the related literature have studied algorithms for finding flexible
solutions to the scheduling problem, i.e., solutions that can handle uncertainties
related to faults in the system, or that are less affected by such uncertainties
than those produced by a regular scheduler.
Ali et al. [2] proposed a mathematical formulation of a robustness metric that
can be applied to various parallel and distributed systems. The authors apply this
metric to two example systems, one of them being the independent application
allocation system that we consider in this paper. When this robustness metric is
adopted, it is guaranteed that the given makespan requirement will be met as long
as the collective difference between the actual task execution times and the
estimated times stays within a certain calculated range. This metric has been
used in a number of related works [7,13].
Several works aim at predicting an uncertainty value in order to include this
prediction in the scheduling knowledge. All of these works focus on predicting
execution time uncertainty while considering a simple FCFS scheduling approach;
they do not consider the energy consumption of the system. Tsafrir et al. [19]
proposed a system-generated prediction scheme based on users' history and applied
it to the EASY [8] algorithm. Using this approach they achieved a 25% average
reduction in wait time and slowdown. Tran et al. [14] presented a method for
predicting task execution time based on historical data. Using this predictor
they were able to improve accuracy by up to 32%. The CREASY scheduler by
Shmueli et al. [17] exploits knowledge of user behavior to improve the QoS of the
system. Using an alternative simulation methodology called site-level simulation,
they were able to improve user productivity by up to 50%. Tang et al. [18]
analyzed the impact of execution time estimates on scheduling algorithms for the
Blue Gene/P, designing and implementing a number of schemes for adjusting
estimates. These schemes use historical workload data to predict the accuracy of
a task estimate based on user and project information. The analysis showed that
user estimates are highly inaccurate: only 31-33% of all the considered tasks had
an estimation accuracy of 80% or more, and up to 21-28% had an accuracy of 20%
or less. The experiments showed that the adjusting schemes were able to improve
system performance by up to 20%.
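The history-based predictors surveyed above share a common idea: replace (or adjust) a user's runtime estimate with a value derived from that user's previously observed runtimes. The sketch below illustrates this idea in its simplest form, estimating the next runtime as the mean of a user's most recent completed tasks; the class name, window size, and fallback rule are illustrative choices of ours, not the exact schemes of the cited works.

```python
from collections import defaultdict, deque

class HistoryPredictor:
    """Toy history-based runtime predictor: estimate a user's next task
    runtime as the mean of that user's most recent completed runtimes.
    Illustrative only; the cited predictors use richer features."""

    def __init__(self, window=2):
        # Keep only the `window` most recent runtimes per user.
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record(self, user, runtime):
        # Store the observed runtime of a completed task.
        self.history[user].append(runtime)

    def predict(self, user, user_estimate):
        # Fall back to the user's own estimate when no history exists.
        runs = self.history[user]
        if not runs:
            return user_estimate
        return sum(runs) / len(runs)

p = HistoryPredictor(window=2)
p.record("alice", 100.0)
p.record("alice", 140.0)
print(p.predict("alice", 600.0))  # mean of last two runtimes: 120.0
print(p.predict("bob", 300.0))    # no history: falls back to 300.0
```

Even this crude predictor captures why such schemes help: user-supplied estimates tend to be loose upper bounds, while recent history tracks actual behavior much more closely.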
In our previous work [16], we introduced the model for multi-core computing
systems that we apply in this article. Our approach did not apply DVFS or other
specific techniques for power/energy management. Instead, we proposed an energy
consumption model (the MIN/MAX model) based on the energy required to execute
tasks at full capacity ( E MAX ), the energy consumed when not all the available
cores of a machine are used, and the energy that each machine in the system
consumes in the idle state ( E IDLE ). In that work, we proposed twenty fast list
scheduling methods adapted to solve the bi-objective problem we also consider
here, simultaneously optimizing both makespan and energy consumption when
executing independent BoT applications on a computing system composed of
multi-core computers.
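As a rough illustration of how a machine's consumption can be evaluated under a MIN/MAX-style model, the sketch below treats E IDLE and E MAX as the machine's idle and full-load power levels and interpolates linearly with the fraction of busy cores; the linear interpolation and the function signature are our assumptions for this sketch, not the paper's exact definition of the intermediate-utilization term.

```python
def machine_energy(e_idle, e_max, total_cores, used_cores, time):
    """Energy consumed by one machine over `time` seconds under a
    MIN/MAX-style model: idle power plus a share of the idle-to-full
    range proportional to the fraction of cores in use.
    Linear interpolation is an assumption of this sketch."""
    assert 0 <= used_cores <= total_cores
    utilization = used_cores / total_cores
    power = e_idle + (e_max - e_idle) * utilization
    return power * time

# A 4-core machine idling at 100 W and peaking at 220 W,
# with 2 cores busy for 10 s:
print(machine_energy(100.0, 220.0, 4, 2, 10.0))  # 1600.0
```

The two boundary cases recover the model's named quantities: with no busy cores the machine draws only its idle power, and with all cores busy it draws the full-capacity power.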
 