Implementing a data archiving policy wherein old data is
periodically moved into lower-cost storage. While this is a
sensible approach, it also constrains data usage and analysis:
less data is available to your analysts and business users at any
one time. This may result in less comprehensive analysis of user
patterns and can greatly affect analytic conclusions.
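The archiving policy described above can be sketched as a periodic job that moves rows older than a retention cutoff into a separate archive store. The following is a minimal illustration using SQLite; the table names (`events`, `events_archive`), the schema, and the 90-day cutoff are all assumptions made for the example, not part of any particular product.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed policy: rows older than this move to cheap storage

def archive_old_rows(conn: sqlite3.Connection) -> int:
    """Copy rows older than the retention cutoff into the archive table,
    delete them from the hot table, and return how many rows moved."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO events_archive SELECT * FROM events WHERE created_at < ?",
        (cutoff,),
    )
    cur.execute("DELETE FROM events WHERE created_at < ?", (cutoff,))
    moved = cur.rowcount  # DELETE reports the number of affected rows
    conn.commit()
    return moved

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, created_at TEXT)")
    conn.execute("CREATE TABLE events_archive (id INTEGER, created_at TEXT)")
    old_ts = (datetime.now(timezone.utc) - timedelta(days=365)).isoformat()
    new_ts = datetime.now(timezone.utc).isoformat()
    conn.executemany("INSERT INTO events VALUES (?, ?)", [(1, old_ts), (2, new_ts)])
    print(archive_old_rows(conn))  # moves only the year-old row
```

In a real deployment the archive target would be a cheaper tier (object storage, a separate database) rather than a second table, but the trade-off is the same: archived rows are no longer visible to everyday queries.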
Upgrading network infrastructure leads to both increased costs
and, potentially, more complex network configurations.
From the arguments above, it is clear that simply throwing money at your
database problem does not solve the underlying issue. Are there alternative approaches?
Before we dive deep into alternative solutions and architectural strategies, let us first
understand how databases have evolved over the past decade or so.
The Database Evolution
It is a widely accepted fact that innovation in database technologies began with the
appearance of the relational database management system (RDBMS) and its associated
data access mechanism, the Structured Query Language (SQL). The RDBMS was
primarily designed to handle both online transaction processing (OLTP) workloads and
business intelligence (BI) workloads. In addition, a plethora of products and add-on
utilities were quickly developed to augment RDBMS capabilities, creating
a rich ecosystem of software products that depended on its SQL interface and fulfilled
many business needs.
Database engineering was primarily built to access data held on spinning disks.
The data access operations utilized systems memory to cache data and were largely
dependent on the CPU power available. Over time, innovations in efficient usage of
memory and faster CPU cycle speeds significantly improved data access and usage
patterns. Databases also began to explore options for parallel processing of
workloads. In the early days, the typical RDBMS installation was a large symmetric
multiprocessing (SMP) server; later, these individual servers were clustered, with
interconnects between two or more servers, so that they appeared as a single logical database
server. These cluster-based architectures significantly improved parallelism and provided
high performance and failover capabilities.
Improvements in hardware components such as memory capacity and network
speed were gradual and continue to evolve. In particular, in-memory technology
made it possible to retain small but frequently accessed datasets entirely in memory.
Network speeds also improved to the extent that it became feasible to assemble much
larger clusters of servers, known as grid computing, to further optimize and efficiently
distribute workloads. These hardware improvements triggered the
creation of another type of RDBMS offering, known as column-store databases. Sybase,
now an SAP company, was the first to bring an enterprise-standard column-store
platform to market, the Sybase IQ database.
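To make the column-store idea concrete, the sketch below contrasts row-oriented and column-oriented layouts of the same small table in plain Python: an aggregate over one column touches every full record in the row layout, but only a single contiguous list in the column layout. The table and column names are invented for the illustration; real column stores such as Sybase IQ add compression and vectorized execution on top of this basic layout.

```python
# Row-store layout: each record keeps all of its fields together,
# so scanning one column still reads every full row.
rows = [
    {"order_id": 1, "region": "EU", "amount": 120.0},
    {"order_id": 2, "region": "US", "amount": 75.5},
    {"order_id": 3, "region": "EU", "amount": 42.0},
]

# Column-store layout: each column is stored contiguously,
# so an aggregate over "amount" reads only that one array.
columns = {
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [120.0, 75.5, 42.0],
}

row_total = sum(r["amount"] for r in rows)  # touches all fields of every row
col_total = sum(columns["amount"])          # touches only the amount column

assert row_total == col_total == 237.5
```

This is why column stores suit BI-style scans and aggregations, while row stores remain a natural fit for OLTP workloads that read and write whole records.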