and (d) the system reliability level has been increased as the availability of most
components is checked and controlled.
The obvious disadvantage of this architecture concerns the handling of updates.
LD updates can leave applications without up-to-date information for a certain
amount of time. This is not a major limitation for the applications currently
supported, but it certainly limits the suitability of the system for real-time
(emergency) applications. Another limitation is that LMS is the only single
point of failure in the system: when it fails, user requests can no longer be
processed. However, the load on this component is minimal and so is its
processing effort, as the actual processing is delegated to the respective
Virtuoso instance.
In addition, a separate script runs on the instance hosting LMS and takes care
of rebooting it when it goes down. It is therefore unlikely that the component
reaches its limits and fails, and when it does fail, it can be rebooted rapidly.
Obviously, a fatal failure can never be completely ruled out, so one small
extension to the current system architecture would be to add an Amazon LB on
top of LMS, fixed to a single instance, such that when the instance goes down
permanently a new instance hosting LMS is created. The Amazon LB is itself
replicated and thus not a single point of failure, so in this sense the system
would no longer contain any single point of failure. However, this would add a
small latency to the handling of user requests as well as an additional
operating cost to the system.
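As a rough illustration, a watchdog of this kind could be sketched as follows in
Python; the health-check URL, the restart command and the check interval are
hypothetical and depend on how LMS is actually deployed on its hosting instance.

import subprocess
import time
import urllib.request

# Hypothetical values: the actual LMS health endpoint and restart command
# depend on how LMS is deployed on the hosting instance.
LMS_HEALTH_URL = "http://localhost:8080/lms/health"
LMS_RESTART_CMD = ["systemctl", "restart", "lms"]
CHECK_INTERVAL_SEC = 30

def lms_is_up() -> bool:
    """Return True if the LMS health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(LMS_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

def watchdog() -> None:
    """Periodically check LMS and restart it when it does not respond."""
    while True:
        if not lms_is_up():
            subprocess.run(LMS_RESTART_CMD, check=False)
        time.sleep(CHECK_INTERVAL_SEC)

if __name__ == "__main__":
    watchdog()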
The advantages of the proposed architecture take effect only when the system is
properly configured. This means that the image updating and scaling policies
should be specified in such a way that the system functions as desired without
raising any significant issue, such as very frequent image updating that
degrades system performance, or circular scaling actions that increase the
system cost in terms of the resources used.
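As a rough illustration, such a scaling policy could be sketched as follows; the
CPU thresholds and the cooldown period are hypothetical values that would have
to be tuned based on the experiments reported below, and it is the cooldown that
prevents circular (oscillating) scaling actions.

# Sketch of a scaling policy with a cooldown period so that scale-out and
# scale-in actions do not cancel each other out (circular scaling).
# All threshold values are hypothetical and must be tuned experimentally.
SCALE_OUT_CPU_THRESHOLD = 80.0  # average CPU load (%) above which an instance is added
SCALE_IN_CPU_THRESHOLD = 20.0   # average CPU load (%) below which an instance is removed
COOLDOWN_SEC = 300              # minimum time between two consecutive scaling actions

def scaling_decision(avg_cpu_load: float, seconds_since_last_action: float) -> str:
    """Return 'scale_out', 'scale_in' or 'none' for the current monitoring window."""
    if seconds_since_last_action < COOLDOWN_SEC:
        return "none"
    if avg_cpu_load > SCALE_OUT_CPU_THRESHOLD:
        return "scale_out"
    if avg_cpu_load < SCALE_IN_CPU_THRESHOLD:
        return "scale_in"
    return "none"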
The next section describes the experiments that were performed to determine the
correct content of these policies as well as to evaluate the query/export
performance of the proposed system.
6 Evaluation and Implementation
6.1 Experiment Set-Up
Two main experiments were performed, each with the goal of measuring the
average query performance and CPU load over time when a particular number of
concurrent users issues a specific number of queries against a standalone and a
load-balancing based configuration. In the first experiment, 100 concurrent
users were created, each issuing 50 requests, while in the second experiment
50 concurrent users each issued 50 requests.
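A load generator of this kind could be sketched as follows; the LMS endpoint and
the query payload are hypothetical placeholders, and only the average response
time over all issued requests is reported.

import time
from concurrent.futures import ThreadPoolExecutor
import urllib.parse
import urllib.request

# Hypothetical endpoint of the LMS front end; the actual URL depends on the deployment.
LMS_ENDPOINT = "http://lms.example.org/sparql"
USERS = 100          # concurrent users (100 in the first experiment, 50 in the second)
REQUESTS_PER_USER = 50
QUERY = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"  # placeholder query

def run_user(user_id: int) -> float:
    """Issue REQUESTS_PER_USER queries and return the total time spent waiting."""
    total = 0.0
    data = urllib.parse.urlencode({"query": QUERY}).encode()
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        try:
            urllib.request.urlopen(LMS_ENDPOINT, data=data, timeout=60).read()
        except Exception:
            pass  # failed requests still count towards the elapsed time
        total += time.time() - start
    return total

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        totals = list(pool.map(run_user, range(USERS)))
    avg = sum(totals) / (USERS * REQUESTS_PER_USER)
    print(f"average query response time: {avg:.3f} s")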
In the standalone configuration, depending on whether we want to evaluate the
old or the new LMS system, the LMS connects either only to the Virtuoso engine
of the master instance or to the respective scaling layers related to the content of the queries to