implemented is finishing all jobs by user-specified deadlines in a cost-efficient way.
The method, built around a monitor-control loop, adapts to dynamic changes such as
workload bursts and delayed instance acquisitions, and it showed strong performance
in the experiments.
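The monitor-control loop described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the job structure, the scale-down threshold, and the linear work-rate model are all assumptions.

```python
# Hypothetical sketch of a monitor-control auto-scaling loop for
# deadline-constrained jobs. Job format, thresholds, and the linear
# capacity model are illustrative assumptions.

def estimate_finish_time(pending_jobs, num_instances, rate_per_instance):
    """Estimate when the current backlog finishes with the given capacity."""
    total_work = sum(job["remaining_work"] for job in pending_jobs)
    return total_work / (num_instances * rate_per_instance)

def control_step(pending_jobs, num_instances, rate_per_instance,
                 deadline, cost_per_instance):
    """One loop iteration: scale up if the deadline would be missed,
    scale down if capacity is clearly idle, otherwise hold steady."""
    eta = estimate_finish_time(pending_jobs, num_instances, rate_per_instance)
    if eta > deadline:
        # Acquire just enough instances to meet the deadline (cost-efficient).
        total_work = sum(job["remaining_work"] for job in pending_jobs)
        needed = -(-total_work // int(deadline * rate_per_instance))  # ceil
        return max(needed, num_instances)
    if eta < 0.5 * deadline and num_instances > 1:
        return num_instances - 1   # release an idle instance to cut cost
    return num_instances

jobs = [{"remaining_work": 40}, {"remaining_work": 20}]
print(control_step(jobs, num_instances=1, rate_per_instance=10,
                   deadline=2.0, cost_per_instance=0.1))  # scales up to 3
```

In practice such a controller would also have to model instance acquisition lag, which is exactly the dynamic change the reviewed method adapts to.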
Scaling and Scheduling to Maximize Application Performance with Budget
Constraints
Ming Mao et al. [15] proposed two auto-scaling mechanisms to address the question
of how to maximize the return on a cloud investment. They implemented two
algorithms: the scheduling-first algorithm distributes the application-wide budget to
each individual job, determines the task scheduling plan first, and then acquires the
VMs, while the scaling-first algorithm determines the size of the cloud resources
first and then schedules the workflow jobs on the acquired instances. The results
show good tolerance to inaccurate parameters.
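The budget-distribution step of a scheduling-first policy can be sketched as follows. This is a simplified reading under stated assumptions, not the authors' exact algorithm: the VM catalogue, the proportional budget split, and the job format are all illustrative.

```python
# Illustrative "scheduling-first" sketch: split the application-wide budget
# across jobs in proportion to workload, then let each job pick the fastest
# VM type its share can afford. VM names/prices are assumptions.

VM_TYPES = [  # (name, price per hour, speedup factor vs. "small")
    ("small", 0.10, 1.0),
    ("medium", 0.20, 1.8),
    ("large", 0.40, 3.2),
]

def scheduling_first(jobs, total_budget):
    """jobs: list of (job_id, workload in hours on a small VM)."""
    total_work = sum(w for _, w in jobs)
    plan = {}
    for job_id, work in jobs:
        share = total_budget * work / total_work    # budget distribution
        best = None
        for name, price, speed in VM_TYPES:
            hours = work / speed
            if hours * price <= share:              # affordable on this share?
                if best is None or hours < best[1]:
                    best = (name, hours)
        plan[job_id] = best or ("small", work)      # fall back to cheapest
    return plan

print(scheduling_first([("render", 4.0), ("encode", 2.0)], total_budget=1.2))
```

A scaling-first policy would invert the order: fix the pool of instances that the budget can buy, then schedule the workflow jobs onto it.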
Compromise-Time-Cost Scheduling Algorithm
Liu Ke et al. [6] presented a novel compromised-time-cost scheduling algorithm
that focuses on the trade-off between time and cost throughout the scheduling
process. Attending to the characteristic features of cloud computing, the algorithm
considers execution cost and average execution time and makes the trade-off
dynamically according to user preferences, in order to solve the scheduling problem
of instance-intensive, cost-constrained workflows. The algorithm can be further
decomposed into two sub-algorithms: CTC-MC (Compromised-Time-Cost algorithm
Minimizing execution Cost), which minimizes the execution cost within the
user-designated deadline, and CTC-MT (Compromised-Time-Cost algorithm
Minimizing execution Time), which minimizes the execution time within the
user-designated budget.
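The selection rules of the two sub-algorithms can be sketched as a pair of constrained minimizations. This is a minimal sketch, assuming each task has a set of candidate (time, cost) service options; the option values are hypothetical.

```python
# Minimal sketch of the two compromised-time-cost variants. CTC-MC picks
# the cheapest option that still meets the user's deadline; CTC-MT picks
# the fastest option that stays within the user's budget. Options are
# hypothetical (exec_time, exec_cost) pairs.

def ctc_mc(options, deadline):
    """Minimize execution cost subject to a deadline."""
    feasible = [o for o in options if o[0] <= deadline]
    return min(feasible, key=lambda o: o[1]) if feasible else None

def ctc_mt(options, budget):
    """Minimize execution time subject to a cost budget."""
    feasible = [o for o in options if o[1] <= budget]
    return min(feasible, key=lambda o: o[0]) if feasible else None

opts = [(10, 1.0), (6, 2.5), (3, 5.0)]
print(ctc_mc(opts, deadline=8))   # (6, 2.5): cheapest meeting the deadline
print(ctc_mt(opts, budget=3.0))   # (6, 2.5): fastest within the budget
```

The full algorithm applies this compromise per task, dynamically, over the whole workflow rather than to a single option list.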
3.3
Heuristic Based Workflow Scheduling Algorithms
A Particle Swarm Optimization (PSO)-Based Heuristic for Scheduling Workflow
Application in Cloud Computing Environments
In addition to optimizing execution time, the cost arising from data transfers between
resources as well as execution costs must also be taken into account. Suraj Pandey
et al. [12] proposed a particle swarm optimization (PSO)-based scheduling heuristic
for data-intensive applications that takes into account both computation cost and
data-transmission cost. They use the heuristic to minimize the total execution cost
of scientific application workflows in cloud computing environments. Varying the
communication cost between resources and the execution cost of compute resources,
they compare the results against the "Best Resource Selection" (BRS) heuristic. The
experiments show that PSO-based task-resource mapping can achieve cost savings of
at least a factor of three compared with BRS-based mapping.
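A toy version of such a PSO-based task-resource mapping is sketched below. The cost matrices, dependency edges, and PSO coefficients are illustrative assumptions, not the values used in the cited experiments; a continuous particle position is rounded to a discrete resource index for fitness evaluation.

```python
# Toy PSO sketch mapping tasks to resources so that computation plus
# data-transfer cost is minimized. All numbers are illustrative.
import random

COMPUTE = [[4, 8], [6, 3], [5, 5]]      # compute cost of task i on resource j
TRANSFER = [[0, 2], [2, 0]]             # transfer cost between resources
EDGES = [(0, 1), (1, 2)]                # workflow data-flow edges

def fitness(mapping):
    """Total cost of a task->resource mapping: computation + transfers."""
    cost = sum(COMPUTE[t][mapping[t]] for t in range(len(mapping)))
    cost += sum(TRANSFER[mapping[a]][mapping[b]] for a, b in EDGES)
    return cost

def pso(num_tasks=3, num_resources=2, particles=10, iters=50, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(0, num_resources - 1) for _ in range(num_tasks)]
           for _ in range(particles)]
    vel = [[0.0] * num_tasks for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: fitness([round(x) for x in p]))[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(num_tasks):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0),
                                num_resources - 1)
            cand = [round(x) for x in pos[i]]
            if fitness(cand) < fitness([round(x) for x in pbest[i]]):
                pbest[i] = pos[i][:]
                if fitness(cand) < fitness([round(x) for x in gbest]):
                    gbest = pos[i][:]
    best_map = [round(x) for x in gbest]
    return best_map, fitness(best_map)

mapping, cost = pso()
print(mapping, cost)
```

A BRS-style baseline would instead greedily assign each task to its individually best resource, which ignores the transfer costs that the swarm optimizes jointly.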
Market-Oriented-Hierarchical Scheduling
Zhangjun Wu et al. [7] proposed a cloud workflow scheduling strategy based on an
intelligence algorithm, together with an adaptation-aware cloud service composition
strategy, developed to schedule cloud workflow tasks at two levels. The two levels
of scheduling are service-level scheduling, which selects a suitable cloud service for
each task