Whether a decrease in OLTP performance really is wrong depends on the user requirements of the OLTP system and the actual response times of OLTP requests.
Consider, as an example, a company whose representatives rely on fast OLTP response times because they are in direct contact with customers, e.g., on the telephone. In this situation, the possibility of speeding up OLAP at the cost of OLTP performance has to be judged carefully. As long as the increase in response times remains imperceptible, e.g., in the range of microseconds up to milliseconds, the headroom granted by the threshold of human perception can be used to improve OLAP run times. For example, an increased response time for the complete execution of an OLTP request and the display of its result remains acceptable for the representatives as long as it stays between half a second and at most one second after an optimization for OLAP requests is employed. A huge benefit for the OLAP users is created if such an optimization pushes the response times of OLAP requests below the threshold of one second: "Delays of longer than one second will seem intrusive on the continuity of thought" [22]. Nielsen [153, Chap. 5] explains that one second is the limit for a user's flow of thought to stay uninterrupted, even though the user notices the delay. Yet, at this threshold the OLAP users can start to work interactively without the system itself being the reason that they become distracted.
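
The reasoning above can be stated as a simple latency-budget check. The following sketch, with entirely hypothetical baseline and overhead figures, tests whether an OLTP request still completes within the one-second perception threshold after an OLAP-oriented optimization adds overhead.

    # Illustrative sketch, not part of CBTR: all figures are hypothetical.
    PERCEPTION_LIMIT_S = 1.0  # beyond ~1 s the user's flow of thought is interrupted

    def oltp_still_acceptable(baseline_s: float, olap_overhead_s: float) -> bool:
        """True if the OLTP response, including the overhead introduced by
        an OLAP optimization, stays at or below the one-second threshold."""
        return baseline_s + olap_overhead_s <= PERCEPTION_LIMIT_S

    # Millisecond-range overheads stay imperceptible; larger ones may push
    # the request past the threshold.
    for baseline, overhead in [(0.4, 0.005), (0.7, 0.25), (0.9, 0.3)]:
        verdict = "acceptable" if oltp_still_acceptable(baseline, overhead) else "too slow"
        print(f"{baseline + overhead:.3f}s -> {verdict}")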
As soon as the speed-up of OLAP causes side effects that diminish OLTP throughput or violate response-time requirements, the decision has to be reevaluated with the business consequences in mind, for example, stalled business processes or reduced customer satisfaction if responses are not received immediately. The raw query performance of specialized database systems developed for either OLTP or OLAP currently surpasses that of a combined database system. At some point, however, the decision has to be made whether a landscape of dedicated systems will be set up or whether an integrated one is desired.
CBTR provides a basis for assessing emerging systems with regard to their ability to cope with given workloads. Which workload share is actually defined for a measurement depends on the party responsible for the measurements. In CBTR, workload shares can be configured flexibly and can thus be adapted to the requirements of any company interested in assessing a specific scenario. To further guide the decision-making process, a number of factors have emerged from the discussions throughout this thesis and from the application of the benchmark in the evaluation of database schema optimizations. These factors have to be weighed to reach the right decision for the given requirements.
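
To make the idea of configurable workload shares concrete, the following sketch models a workload mix as two shares that must sum to one. The class and field names are purely illustrative; CBTR's actual configuration interface is not shown in this text.

    # Hypothetical sketch of a configurable workload mix; the names are
    # illustrative and do not reflect CBTR's real interface.
    from dataclasses import dataclass

    @dataclass
    class WorkloadMix:
        oltp_share: float  # fraction of requests that are transactional
        olap_share: float  # fraction of requests that are analytical

        def __post_init__(self):
            if abs(self.oltp_share + self.olap_share - 1.0) > 1e-9:
                raise ValueError("shares must sum to 1")

    # Each company can adapt the mix to its own scenario, e.g., a
    # call-center-heavy business versus a reporting-heavy one.
    call_center = WorkloadMix(oltp_share=0.9, olap_share=0.1)
    reporting = WorkloadMix(oltp_share=0.4, olap_share=0.6)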
Having a single source of truth, as in a hybrid system, restricts the introduction of optimizations within the data structures, because an optimization typically benefits either OLTP or OLAP performance, but rarely both. Thus, optimizations have to be chosen carefully, e.g., based on performance thresholds and workload shares as discussed above.
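
One hedged way to operationalize such a choice is sketched below: each candidate optimization is scored by weighting its assumed per-workload speed-up with the configured workload shares, and any candidate that would push OLTP response times past the threshold is rejected. All candidates and numbers are invented for illustration; they are not taken from the thesis.

    # Illustrative decision sketch with hypothetical speed-up factors.
    def score(opt: dict, oltp_share: float, olap_share: float) -> float:
        """Workload-share-weighted speed-up of a candidate optimization."""
        return oltp_share * opt["oltp_speedup"] + olap_share * opt["olap_speedup"]

    def admissible(opt: dict, oltp_baseline_s: float, limit_s: float = 1.0) -> bool:
        """Reject candidates that violate the OLTP response-time threshold."""
        return oltp_baseline_s / opt["oltp_speedup"] <= limit_s

    candidates = {
        "columnar layout": {"oltp_speedup": 0.8, "olap_speedup": 5.0},  # helps OLAP, slows OLTP
        "extra index": {"oltp_speedup": 1.1, "olap_speedup": 1.5},      # mildly helps both
    }

    oltp_baseline_s = 0.7
    viable = {name: score(opt, oltp_share=0.9, olap_share=0.1)
              for name, opt in candidates.items()
              if admissible(opt, oltp_baseline_s)}
    print(max(viable, key=viable.get))  # best admissible candidate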
Data extraction and synchronization are necessary steps to update the data in an analytical system that is separate from the operational system. The data preparation used in current analytical systems allows the introduction of optimizations that speed up reporting. Finally, a separate analytical system and its ETL process create redundant data that has to be managed. The increased resource usage caused by keeping redundant data sets and running the ETL tasks is only the smallest portion of the costs this introduces.
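
The following minimal ETL sketch illustrates where that redundancy comes from: operational rows are extracted, pre-aggregated for reporting, and loaded into a separate analytical store as a second, managed copy of the data. The table and column names are assumed for illustration.

    # Minimal ETL sketch using two in-memory SQLite databases as stand-ins
    # for the operational (OLTP) and the separate analytical (OLAP) system.
    import sqlite3

    src = sqlite3.connect(":memory:")  # operational system (assumed schema)
    dst = sqlite3.connect(":memory:")  # separate analytical system

    # Sample operational data, normally maintained by the OLTP system.
    src.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL)")
    src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(1, 10, 99.0), (2, 10, 1.5), (3, 11, 20.0)])

    # Extract: read the orders from the operational system.
    rows = src.execute("SELECT customer_id, amount FROM orders").fetchall()

    # Transform: pre-aggregate per customer to speed up reporting.
    totals = {}
    for customer_id, amount in rows:
        totals[customer_id] = totals.get(customer_id, 0.0) + amount

    # Load: write the redundant, report-optimized copy into the OLAP store.
    dst.execute("CREATE TABLE customer_totals (customer_id INTEGER PRIMARY KEY, total REAL)")
    dst.executemany("INSERT OR REPLACE INTO customer_totals VALUES (?, ?)", totals.items())
    dst.commit()
    print(dst.execute("SELECT * FROM customer_totals").fetchall())  # [(10, 100.5), (11, 20.0)]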