Database Reference
In-Depth Information
that processes client requests for data hosted on the storage engine (a relational
database). Generator operations are performed on a temporary private scratchpad,
resulting in a virtual private copy of the service state. Upon completion of a
generator operation, the proxy server sends the corresponding shadow operation to
the concurrency coordinator. The coordinator notifies the proxy server whether the
operation is accepted or rejected according to RedBlue consistency. If accepted, the
operation is then delegated to the local data writer, which executes it in the
storage engine.
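The generator/shadow split described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual system's code; the class names, the `accepts` method on the coordinator, and the `apply` method on the data writer are all assumptions made for the sketch:

```python
# Hypothetical sketch of the proxy-server flow described above:
# a generator operation runs against a private scratchpad copy of the
# state, and the resulting shadow operation is sent to the concurrency
# coordinator, which accepts or rejects it under RedBlue consistency.

import copy

class ProxyServer:
    def __init__(self, state, coordinator, data_writer):
        self.state = state                  # current service state
        self.coordinator = coordinator      # decides accept/reject
        self.data_writer = data_writer      # applies ops to storage

    def execute(self, generator_op):
        # 1. Run the generator on a virtual private copy of the state.
        scratchpad = copy.deepcopy(self.state)
        shadow_op = generator_op(scratchpad)
        # 2. Ask the coordinator whether the shadow operation is
        #    admissible under RedBlue consistency.
        if self.coordinator.accepts(shadow_op):
            # 3. Delegate the accepted operation to the local data
            #    writer, which executes it in the storage engine.
            self.data_writer.apply(shadow_op)
            return True
        return False
```

The key point the sketch captures is that the generator never touches the shared state directly; only shadow operations vetted by the coordinator reach the storage engine.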
10.5.2 Consistency Rationing
Data created and processed by the same application can differ, and so can the
consistency requirements placed on them. For instance, a web shop service processes
data of several kinds: customer profiles and credit card information, sales records,
user preferences, and so on. Not all of these data kinds have the same requirements
in terms of consistency and availability. Moreover, even within the same category,
data might exhibit dynamic, changing consistency requirements. For example, auction
data might tolerate weaker consistency at the start of an auction than towards
its end.
The consistency rationing model [47] allows designers to define consistency
requirements on data instead of on transactions. It divides data into three
categories: A, B, and C. Category A data requires strong consistency guarantees;
therefore, all transactions on this data are serializable. However, serializability
requires coordination protocols and implementation techniques that are expensive in
terms of both monetary cost and performance. Category C contains data for which
temporary inconsistency is acceptable. Consequently, only weaker guarantees, in the
form of session consistency, are implemented for this category, which lowers the
cost per transaction and allows better availability. Category B, on the other hand,
holds data whose consistency requirements change over time, as is the case for many
applications. This data is handled with adaptive consistency, which switches between
serializability and session consistency at runtime whenever necessary. The goal of
the adaptive consistency strategies is to minimize the overall cost of the service
provided in the cloud. The general policy is an adaptive consistency model based on
the probability of update conflicts. It observes the access frequency to data items
in order to compute the probability of access conflicts. When this probability rises
above an adaptive threshold, serializability is selected. The threshold is computed
from the monetary cost of weak and strong consistency and the expected cost of
violating consistency.
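The general policy can be illustrated with a small sketch. The conflict-probability estimate and the cost comparison below are deliberate simplifications for illustration, not the exact formulas of [47]:

```python
# Simplified sketch of the general policy: estimate the probability of
# an update conflict from the observed access frequency, and switch to
# serializability when the expected cost of violating consistency
# exceeds the extra cost of running strongly consistent transactions.

def conflict_probability(accesses_in_window, window_seconds, txn_seconds):
    # Crude estimate (an assumption of this sketch): the chance that
    # two accesses to the same item overlap within one transaction's
    # duration, capped at 1.0.
    rate = accesses_in_window / window_seconds
    return min(1.0, rate * txn_seconds)

def choose_consistency(p_conflict, cost_strong, cost_weak, cost_violation):
    # Adaptive threshold: pick serializability when the expected
    # penalty of a consistency violation outweighs the price gap
    # between strong and weak transactions.
    expected_penalty = p_conflict * cost_violation
    if expected_penalty > (cost_strong - cost_weak):
        return "serializability"
    return "session consistency"
```

The design point the sketch conveys is that the threshold is not fixed: it moves with the relative monetary costs, so cheaper strong consistency or costlier violations both push category B data towards serializability.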
Consistency rationing is implemented in a system that provides storage on top
of Amazon Simple Storage Service (S3) [2], which itself offers only eventual
consistency. Client requests are directed to application servers hosted on Amazon
EC2 [3]; these servers interact with the persistent storage on Amazon S3. To provide
consistency guarantees, update requests are buffered in queues, called pending
update queues, that are implemented on the Amazon Simple Queue Service (SQS) [50].
Session consistency is provided by always routing requests from the same client to
the same server within a session. To provide serializability, in contrast, a
two-phase locking protocol is used.
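The session-routing step above can be sketched with a simple deterministic assignment; the server names and session IDs are illustrative, and a real deployment would also have to handle servers joining and leaving:

```python
# Sketch of session-sticky routing: hashing the session ID gives every
# request of a session the same application server, so the client
# always observes that server's view of the data, which is what
# session consistency requires here.

import hashlib

def route(session_id, servers):
    digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Because the mapping depends only on the session ID and the server list, no per-session routing state needs to be stored at the load balancer.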