10.5 Adaptive Consistency
10.5.1 RedBlue Consistency
10.5.2 Consistency Rationing
10.6 Harmony: Automated Self-Adaptive Consistency
10.6.1 Why Use the Stale Reads Rate to Define the Consistency Requirements of an Application?
10.6.2 Stale Reads Probabilistic Estimation
10.6.3 Harmony Implementation
10.6.4 Harmony Evaluation
10.6.4.1 Throughput and Latency
10.6.4.2 Staleness
10.7 Conclusion
References
10.1 INTRODUCTION
Cloud computing has recently emerged as a popular paradigm for harnessing a large
number of commodity machines. In this paradigm, users acquire computational and
storage resources based on a pricing scheme similar to the economic exchanges in
the utility marketplace: users can lease the resources they need in a pay-as-you-go
manner [1]. For example, the Amazon Simple Storage Service (S3) uses a pricing
scheme based on data size and transfer per gigabyte (e.g., $0.095 per GB for the
first terabyte and $0.020 per GB for inter-region transfer [2]), and the Amazon
Elastic Compute Cloud (EC2) service uses a pricing scheme based on virtual machine
(VM) hours (e.g., $0.065 per small-instance hour [3]).
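As a minimal illustration of such pay-as-you-go billing, the sketch below estimates a monthly cost from the per-GB rates quoted above. The function name, and the restriction to the first-terabyte storage tier, are assumptions made for the example, not Amazon's actual billing logic.

```python
# Quoted rates from the text: $0.095/GB-month for the first terabyte of
# storage and $0.020/GB for inter-region transfer. Tiers beyond the first
# terabyte are deliberately not modeled in this sketch.
STORAGE_RATE_FIRST_TB = 0.095   # $ per GB-month
TRANSFER_RATE = 0.020           # $ per GB transferred between regions

def monthly_cost(storage_gb, transfer_gb):
    """Estimate a monthly bill for usage within the first-terabyte tier."""
    assert storage_gb <= 1024, "sketch only models the first-terabyte tier"
    return storage_gb * STORAGE_RATE_FIRST_TB + transfer_gb * TRANSFER_RATE

# Example: 500 GB stored for a month, 100 GB moved between regions.
print(monthly_cost(500, 100))  # 500*0.095 + 100*0.020 = 49.5
```

The key property of the scheme is that cost scales linearly with consumption: a user who stores nothing and transfers nothing pays nothing.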
Meanwhile, we have entered the era of Big Data, where the volume of data generated
by digital media, social networks, and scientific instruments is increasing at an
extreme rate. With data growing rapidly and applications becoming more
data-intensive, many organizations have moved their data to the cloud, aiming to
provide cost-efficient, scalable, reliable, and highly available services. For
instance, Animoto,* a start-up for video creation and sharing, successfully used
Amazon Web Services to cope with a surge of users from 5,000 a day to 250,000 a day
without investing any money in building new servers [4], and has since shifted its
service completely to Amazon. Cloud providers allow service providers to deploy and
customize their environments in multiple, physically separate datacenters to meet
ever-growing user needs. Services can therefore replicate their state across
geographically diverse sites and direct users to the closest or least loaded site.
Replication has become an essential feature in storage systems and is extensively
leveraged in cloud environments [5-7]. It is the main reason behind several
features such as fast accesses, enhanced performance, and high availability.
For fast access, user requests can be directed to the closest datacenter to
avoid communication delays and thus ensure fast response times and low
latency.
* http://www.animoto.com.
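The closest-site selection described above can be sketched as follows. The site names, the latency probe, and the helper functions are illustrative assumptions for this sketch, not part of any cited system; a real deployment would measure round-trip times to live replicas.

```python
import random

# Hypothetical replica sites; the names and simulated latencies below are
# assumptions for the example only.
SITES = ["us-east", "eu-west", "ap-south"]

def probe_latency_ms(site):
    """Stand-in for a real round-trip-time measurement (e.g., a ping).
    Simulated here as a fixed per-site base plus small jitter."""
    base = {"us-east": 40, "eu-west": 90, "ap-south": 160}
    return base[site] + random.uniform(0, 5)

def closest_site(sites):
    """Direct the request to the site with the lowest measured latency."""
    return min(sites, key=probe_latency_ms)

print(closest_site(SITES))  # us-east has the lowest simulated latency
```

In practice, request routing can also weigh current load, so a lightly loaded but slightly farther site may be preferred over an overloaded nearby one.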