In this chapter, we present the CARE (Cloud Architecture Runtime Evaluation) framework [241], developed to address the following research questions:
- What are the performance characteristics of different cloud platforms, including cloud hosting servers and cloud databases?
- What availability and reliability characteristics do cloud platforms typically exhibit? What sorts of faults and errors may be encountered when services run on different cloud platforms under high request volumes or other stress conditions?
- What are some of the reasons behind these faults and errors? What architecture-internal insights may be deduced from these observations?
- What software engineering challenges could developers and architects face when using cloud platforms as their production environment for service delivery?
An empirical experiment was carried out by applying the CARE framework to three different cloud platforms. The results facilitate an in-depth analysis of the major runtime performance differences under various simulated conditions, providing useful information for decision makers on the adoption of different cloud computing technologies.
This chapter presents the CARE evaluation framework in Sect. 4.1, followed by a discussion of the empirical experiment setup and its execution in Sect. 4.2. Section 4.3 presents the experimental results of all test sets and an analysis of the errors captured during the tests. Section 4.4 discusses the application experience with CARE and evaluates the approach.
4.1 The CARE Framework
The CARE framework is a performance evaluation approach tailored specifically to evaluation across a range of cloud platform technologies. It exhibits the following design principles and features:
- Common and consistent test interfaces across all test targets, implemented using web services and RESTful APIs. This ensures that, as much as possible, commonality is maintained across the tests against different platforms, resulting in a fairer comparison.
- Minimal business logic code in the test harness, to reduce variations in results caused by application code. This ensures that performance results can be attributed to the characteristics of the underlying cloud platform rather than to the test application itself.
- Use of canonical test operations: read, write, update, and delete. This principle enables a wide range of cloud application workloads to be simulated as composites of these canonical operations, providing a precise way of describing an application profile.