The key question is: what kind of workload, in principle, should be done in memory?
Let's start by looking at some scenarios where in-memory processing is not only preferred but
also necessary:
Your database is too slow for interactive analytics. Not all
databases are as fast as we would like them to be. This is especially
true for online transaction processing (OLTP) databases, which are
designed to store transactional data. If you are working with a slow
database, you may want to move your data in-memory
so you can perform interactive, speed-of-thought analysis without
constantly waiting for queries to return result
sets from disk.
You need to take load off a transactional database. Regardless
of the speed of your database, when its primary purpose is storing
and processing transactional data, you don't want to put additional
load on it. Analytical queries can put tremendous pressure on a
transactional database and slow it down, negatively impacting
mission-critical business operations. Bringing a set of data into
an in-memory space increases the speed of analytics without
compromising the speed of critical operational business systems.
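The offloading pattern above can be sketched in a few lines. This is a hypothetical illustration, not a specific product's API: it uses Python's built-in sqlite3 module, with an in-memory SQLite database standing in for both the OLTP system and the analytical cache, and invented table and column names (orders, region, amount). The point is the shape of the architecture: one extract from the transactional store, then all analytical queries run against the in-memory copy.

```python
import sqlite3

# Stand-in for the transactional (OLTP) database; in practice this
# would be a separate server reached over the network. Schema and
# data here are invented for illustration.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
oltp.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 120.0), (2, "west", 75.5), (3, "east", 42.0)],
)

# One-time extract: read the slice of data needed for analysis.
rows = oltp.execute("SELECT id, region, amount FROM orders").fetchall()

# In-memory analytical cache: every subsequent query runs here,
# putting zero additional load on the transactional system.
cache = sqlite3.connect(":memory:")
cache.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
cache.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# Interactive analytics against the cache, not the OLTP database.
totals = cache.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(totals)  # [('east', 162.0), ('west', 75.5)]
```

In a real deployment the extract would be scheduled or incremental, but the division of labor is the same: the transactional database is touched once per refresh, while analysts query the in-memory copy as often as they like.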
You require always-on analytics. You may need your analytic
application to be available at all times. Examples include logistics,
supply chain, fraud detection, and financial services applications.
Relying on a single database for full-time availability can be risky,
especially if it doesn't have native failover capabilities. Instead of
letting the database become a single point of failure, a distributed data
cache provides a more reliable alternative: when one node goes down,
the others immediately take over without any interruption in service.
You need analysis of big data. For big data analysis, you may not
want to analyze the entire data set where it is stored. One example is
data stored in Hadoop, which, while extremely powerful,
is subject to high query latency, making it less than ideal for
real-time analytics. Instead, you can load a slice of your big data
set into memory for speed-of-thought analysis and visualization:
discover patterns using the data cached in memory, then connect
directly to Hadoop for scheduled detail reports and dashboards.
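The slice-into-memory approach can be sketched as follows. This is a minimal, hypothetical example: the full_dataset generator stands in for a scan of the big data store (a Hive or Impala query in a Hadoop setting), and the record layout (day, product, sales) is invented. Only the filtered slice is materialized in memory, where repeated exploratory aggregations are cheap.

```python
from collections import defaultdict

def full_dataset():
    # Stand-in for a scan over the big data store; in practice this
    # would be a query against Hadoop (e.g. via Hive), not a loop.
    for day in range(1, 366):
        yield {"day": day, "product": "A" if day % 2 else "B", "sales": float(day)}

# Load only the slice of interest (here, the last 7 days of the year)
# into memory, rather than analyzing the full data set in place.
slice_in_memory = [r for r in full_dataset() if r["day"] > 358]

# Speed-of-thought exploration runs against the in-memory slice;
# each new question is a cheap pass over a small list.
by_product = defaultdict(float)
for r in slice_in_memory:
    by_product[r["product"]] += r["sales"]

print(dict(by_product))  # {'A': 1448.0, 'B': 1086.0}
```

Detailed, full-history reports would still be scheduled against the big data store itself; the in-memory slice exists purely for interactive pattern discovery.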
Would You Still Need A Database?
As much as caching data in memory helps with many analytical scenarios, an
in-memory-only architecture is limiting. You will still need your database.
 