a higher level of consistency is larger). Network latency on Amazon EC2 is high and varies over time (we observe it to be five times higher than in a local cluster). We run workload-A while varying the number of client threads.
Figure 10.4a presents the 99th percentile latency of read operations as the number of client threads increases on EC2. The strong consistency approach exhibits the highest latency, since every read must wait for responses from all replicas, which are spread over different racks and clusters. Eventual consistency provides the smallest latencies, since all read operations are performed on one local replica (possibly at the cost of consistency violations). We can clearly see that Harmony, with both settings, provides almost the same latency as basic static eventual consistency. Moreover, the latency increases as the application's tolerable stale-read rate decreases: the probability of a stale read can easily exceed a tight rate, which forces higher consistency levels and, as a result, higher latency.
In Figure 10.4b, we show the overall throughput for read and write operations with different numbers of client threads. The throughput increases with the number of threads, but it declines beyond 90 threads, because the number of client threads then exceeds the number of storage hosts serving them concurrently. We can observe that throughput is lowest with strong consistency, owing to the extra network traffic generated by the synchronization process as well as the high operation latencies. Our approach with a tolerated stale-read rate of 60% provides throughput comparable to that of the static eventual consistency approach. While exhibiting high throughput, our adaptive policies produce fewer stale reads, since higher consistency levels are chosen only when it matters.
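The adaptive behavior described above can be illustrated with a minimal sketch. The names and thresholds here (`ConsistencyLevel`, `choose_consistency`, the quorum cutoff) are our own illustrative assumptions, not Harmony's actual implementation: the idea is only that the weakest consistency level whose expected stale-read rate stays within the application's tolerance is selected, and stronger (slower) levels are used only when the estimated rate rises above it.

```python
# Hypothetical sketch of Harmony-style adaptive consistency selection.
# The level names mirror common replicated-store terminology; the
# escalation thresholds are illustrative assumptions, not the authors'.

from enum import IntEnum


class ConsistencyLevel(IntEnum):
    ONE = 1      # eventual: read from a single local replica (fastest)
    QUORUM = 2   # wait for a majority of replicas
    ALL = 3      # strong: wait for every replica (slowest)


def choose_consistency(estimated_stale_rate: float,
                       tolerated_stale_rate: float) -> ConsistencyLevel:
    """Pick the weakest level whose expected stale-read rate stays
    within the application's tolerance."""
    if estimated_stale_rate <= tolerated_stale_rate:
        # Cheap local reads are good enough: no violation expected.
        return ConsistencyLevel.ONE
    # Above the tolerance, escalate; the 2x cutoff is an assumption.
    if estimated_stale_rate <= 2 * tolerated_stale_rate:
        return ConsistencyLevel.QUORUM
    return ConsistencyLevel.ALL
```

Under this sketch, a workload whose estimated stale-read rate stays below the tolerated 60% keeps the cheap single-replica reads, which matches the throughput behavior reported above.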
10.6.4.2 Staleness
In Figure 10.4c, we show that Harmony, under both policies and with different application-tolerated stale-read rates, returns fewer stale reads than the eventual consistency approach. Moreover, a more restrictive tolerated stale-read rate yields a smaller number of stale reads. We observe that, with a rate of 40%, the number of stale reads decreases once the number of threads grows beyond 40. This is because, with more than 40 threads, the estimated rate exceeds 40% for most of the run time due to concurrent accesses, so higher consistency levels are chosen, which decreases the number of stale reads. It is important to note that this number of stale reads is not the actual number of stale reads in the system during a normal run, but it is representative.
In fact, to measure the number of stale reads, we perform two read operations for every read operation in the workload. The first read uses the consistency level chosen by our approach, and the second uses the strongest consistency level. We then compare the timestamps returned by the two reads: if they do not match, the first read was stale.
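The dual-read measurement can be sketched as follows. The `store` client and its `read(key, level) -> (value, timestamp)` interface are hypothetical stand-ins for the actual storage API; only the comparison logic (mismatched timestamps imply a stale read) comes from the text.

```python
# Sketch of the dual-read staleness measurement described above.
# `store` is a hypothetical client whose read(key, level) returns a
# (value, timestamp) pair; the interface is an assumption.

def count_stale_reads(store, keys, chosen_level, strongest_level="ALL"):
    """For each key, read once at the adaptively chosen level and once
    at the strongest level; mismatched timestamps mark a stale read."""
    stale = 0
    for key in keys:
        _, ts_chosen = store.read(key, chosen_level)
        _, ts_strong = store.read(key, strongest_level)
        if ts_chosen != ts_strong:
            # The weaker read returned an older version of the value.
            stale += 1
    return stale
```

Note that this doubles the read traffic, which is why the measured count is representative of, rather than identical to, the staleness observed in a normal run.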
To see the impact of network latency on the stale-read estimation, we ran workload-A on Amazon EC2, varying the number of threads (starting with 90 threads, then 70, 40, 15, and finally one thread), and measured the network latency during the run. Figure 10.4d shows that high network latency causes higher stale reads