that of only the snapshot synchronization time. Clearly, there is a trade-off among the time to snapshot the database, the size of the transactional log, and the number of update transactions in the workload. In our framework, this trade-off can be controlled by application-defined parameters. It can be further optimized by applying recently proposed live database migration techniques [9,24].
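The trade-off above can be made concrete with a back-of-the-envelope cost model. The following is our own illustrative sketch, not part of the framework described in the text; the parameter names are assumptions:

```python
def sync_time(snapshot_seconds, update_rate_tps, replay_rate_tps):
    """Estimate the time to bring a new replica up to date (illustrative).

    While the snapshot is taken and copied, the master keeps accepting
    updates, which accumulate in the transactional log and must then be
    replayed on the new replica before it is fully synchronized.
    """
    if replay_rate_tps <= update_rate_tps:
        raise ValueError("replica can never catch up with the master")
    # Updates that accumulated in the log during the snapshot phase.
    log_entries = snapshot_seconds * update_rate_tps
    # While the log is replayed, new updates keep arriving, so the
    # replica catches up only at the net rate (replay - update).
    replay_seconds = log_entries / (replay_rate_tps - update_rate_tps)
    return snapshot_seconds + replay_seconds

# A 100 s snapshot under 10 updates/s, replayed at 110 TPS,
# adds about 10 s of log replay on top of the snapshot time.
total = sync_time(snapshot_seconds=100, update_rate_tps=10, replay_rate_tps=110)
```

The model makes the trade-off visible: a faster (smaller) snapshot shortens the log that must be replayed, while a heavier update workload lengthens it.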
11.5 PERFORMANCE EVALUATION OF DATABASE REPLICATION
ON VIRTUALIZED CLOUD ENVIRONMENTS
The CAP theorem [4] shows that a shared-data system can choose at most two out
of three properties: Consistency (all records are the same in all replicas), Availability
(all replicas can accept updates or inserts), and tolerance to Partitions (the system
still functions when distributed replicas cannot talk to each other). In practice,
cloud-based applications must remain available at all times and accept update
requests, and for scalability reasons they cannot block updates even while other
clients are reading the same data. Therefore, when data is replicated over a wide
area, a system is essentially left to choose between consistency and availability.
Thus, the C (consistency) part of CAP is typically compromised to yield rea-
sonable system availability [1].
sonable system availability [1]. Hence, most of the cloud data management overcome
the difficulties of distributed replication by relaxing the consistency guarantees of the
system. In particular, they implement various forms of weaker consistency models
(e.g., eventual consistency [22]) so that all replicas do not have to agree on the same
value of a data item at every moment of time. In particular, the eventual consistency
policy guarantees that if no new updates are made to the object, eventually all accesses
will return the last updated value. If no failures occur, the maximum size of the incon-
sistency window can be determined based on factors such as communication delays,
the load on the system and the number of replicas involved in the replication scheme.
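The eventual consistency behavior described above can be illustrated with a minimal sketch (our own toy model, with a simulated clock rather than real network delays): a write becomes visible on a replica only after a propagation delay, so reads inside the inconsistency window still see the old value.

```python
class EventuallyConsistentReplica:
    """Toy model of eventual consistency (illustrative sketch only).

    A write propagated from the primary becomes visible on this replica
    only after a fixed propagation delay. Reads issued inside that
    'inconsistency window' return the previous value; once the window
    has passed, all reads return the last updated value.
    """

    def __init__(self, propagation_delay):
        self.delay = propagation_delay
        self.value = None          # value currently visible on the replica
        self.pending = None        # (new_value, time_at_which_it_becomes_visible)

    def write(self, value, now):
        """Record an update from the primary, visible after the delay."""
        self.pending = (value, now + self.delay)

    def read(self, now):
        """Return the value visible on the replica at simulated time `now`."""
        if self.pending is not None and now >= self.pending[1]:
            self.value, self.pending = self.pending[0], None
        return self.value

replica = EventuallyConsistentReplica(propagation_delay=5.0)
replica.write("v1", now=0.0)
stale = replica.read(now=1.0)   # inside the inconsistency window
fresh = replica.read(now=6.0)   # after the window has closed
```

Here `stale` still holds the pre-update value, while `fresh` returns "v1", matching the guarantee that, absent new updates, all accesses eventually return the last updated value.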
In this section, we present an experimental evaluation of the performance char-
acteristics of the master-slave database replication strategy on virtualized database
servers in cloud environments [25]. In particular, the main goals of the experiments
in this section are the following:
- To investigate the scalability characteristics of the master-slave replication
strategy with an increasing workload and an increasing number of database
replicas in a virtualized cloud environment. In particular, we try to identify
which factors limit the achievable scale in such deployments.
- To measure the average replication delay (the window of data staleness) that
can exist with an increasing number of database replicas and different
configurations of the geographical locations of the slave databases.
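The replication delay mentioned in the second goal is commonly probed with a write-then-poll scheme: write a unique marker to the master, then poll a slave until the marker appears. A minimal sketch follows; the callables are hypothetical stand-ins for real master/slave database operations, not part of the evaluation framework described here.

```python
import time

def measure_replication_delay(write_to_master, read_from_slave,
                              poll_interval=0.01, timeout=5.0):
    """Approximate the window of data staleness for one update (a sketch;
    write_to_master and read_from_slave are hypothetical callables
    standing in for real database connections).

    Writes a unique marker via the master, then polls the slave until
    the marker becomes visible; the elapsed time approximates the
    replication delay for that update.
    """
    marker = f"probe-{time.monotonic_ns()}"
    start = time.monotonic()
    write_to_master(marker)
    while time.monotonic() - start < timeout:
        if read_from_slave() == marker:
            return time.monotonic() - start
        time.sleep(poll_interval)
    raise TimeoutError("marker never became visible on the slave")

# Usage with an in-memory stand-in that "replicates" after 50 ms:
state = {}

def write_to_master(value):
    state["value"], state["visible_at"] = value, time.monotonic() + 0.05

def read_from_slave():
    if time.monotonic() >= state.get("visible_at", float("inf")):
        return state.get("value")
    return None

delay = measure_replication_delay(write_to_master, read_from_slave)
```

Averaging such probes over many updates, as in the experiments below, yields the average replication delay per replica configuration.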
11.5.1 Experiment Design
The Cloudstone benchmark* has been designed as a performance measurement tool
for Web 2.0 applications. The benchmark mimics a Web 2.0 social events calendar
* http://radlab.cs.berkeley.edu/wiki/Projects/cloudstone.