become the de facto standard and contains a sound mathematical foundation, allowing for optimisation of the storage and retrieval processes. During the 1980s, research on databases focused on distributed models, in the 1990s on object-oriented models and then in the 2000s on XML-based models. The distributed models were necessary because of the evolution of the Internet and networking, which meant that distributed sites of related information could now be linked up electronically. The object-oriented models then arose with the invention of object-oriented programming and the theory that object-based models are a preferable way to store and manipulate information. Then, with the emergence of XML as the new format for storing and representing information, XML-based models also needed to be considered. While XML is now the de facto standard for describing text on the Internet, meaning that most textual information will soon be stored in that format, it has not replaced the relational model for specific modelling. Neither has the object-oriented model. The increased complexity of these models can make them more difficult to use in some cases, while the mathematical foundations of the relational model remain appealing.
3.3 New Indexing Systems
The recent problems that 'Big Data' provides, linking up mobile or Internet of Things devices with the Web, have meant that new database structures, or particularly their indexing systems, have had to be invented. Slightly more akin to object-oriented databases are new database versions such as NoSQL and NewSQL (Grolinger et al. 2013), or navigational databases.² As stated in Grolinger et al. (2013), the modern Web, with the introduction of mobile and sensor devices, has led to the proliferation of huge amounts of data that can be stored and processed. While the relational model is very good for structured information on a smaller scale, it cannot cope with larger amounts of heterogeneous data, as it is usually required to process full tables to answer a query. As stated in Grolinger et al. (2013), CAP (Gilbert and Lynch 2002) stands for 'consistency, availability and partition tolerance' and has been developed alongside Cloud Computing and Big Data. More specifically, 'the challenges of RDBMS in handling Big Data and the use of distributed systems techniques in the context of the CAP theorem led to the development of new classes of data stores called NoSQL and NewSQL.' They note that consistency in CAP refers to having a single up-to-date instance of the data, whereas in RDBMSs it means that the whole database is consistent.
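The trade-off can be made concrete with a toy simulation. The sketch below is a hypothetical illustration, with classes and names invented for this example rather than taken from Grolinger et al. (2013) or any real data store: a two-replica key-value store configured as 'CP' refuses requests during a network partition in order to preserve the single up-to-date view, while the same store configured as 'AP' remains available but may serve stale data.

# Toy model (hypothetical code) of the CAP choice during a network partition.

class Replica:
    """One copy of the data held on one node."""
    def __init__(self):
        self.data = {}

class TinyStore:
    def __init__(self, mode):
        self.mode = mode              # "CP" or "AP"
        self.primary = Replica()
        self.backup = Replica()
        self.partitioned = False      # True = replicas cannot communicate

    def write(self, key, value):
        if self.partitioned and self.mode == "CP":
            raise RuntimeError("unavailable: cannot reach all replicas")
        self.primary.data[key] = value
        if not self.partitioned:      # replication only succeeds without a partition
            self.backup.data[key] = value

    def read_from_backup(self, key):
        if self.partitioned and self.mode == "CP":
            raise RuntimeError("unavailable: replica may be stale")
        return self.backup.data.get(key)   # AP mode may return stale data

ap = TinyStore("AP")
ap.write("x", 1)
ap.partitioned = True
ap.write("x", 2)                      # accepted by the primary replica only
print(ap.read_from_backup("x"))       # -> 1: still available, but out of date

cp = TinyStore("CP")
cp.write("x", 1)
cp.partitioned = True
try:
    cp.write("x", 2)                  # refused: consistency chosen over availability
except RuntimeError as e:
    print(e)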
NoSQL now has different meanings and might also be termed 'Not Only SQL'. It can use different indexing systems that might not even have an underlying schema, so it can be used to store different types of data structure, probably more as objects than tables. The database aspect, however, can try to provide an efficient indexing system, to allow for consistent search.
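To make the contrast with fixed relational tables concrete, the following sketch shows one way such a schema-less, object-style store with a secondary index might look. It is an illustrative assumption, not the design of any particular NoSQL product: records are free-form objects with no shared schema, and an index over field-value pairs answers queries without scanning every record.

from collections import defaultdict

class DocumentStore:
    """Hypothetical schema-less store: documents are arbitrary dicts."""

    def __init__(self):
        self.docs = {}                 # doc_id -> document (any shape)
        self.index = defaultdict(set)  # (field, value) -> set of doc_ids

    def put(self, doc_id, doc):
        # No schema is enforced: any dictionary is accepted as-is.
        self.docs[doc_id] = doc
        for field, value in doc.items():
            if isinstance(value, (str, int, float, bool)):
                self.index[(field, value)].add(doc_id)

    def find(self, field, value):
        # Index lookup: answers the query without a full scan.
        return [self.docs[i] for i in self.index.get((field, value), set())]

store = DocumentStore()
store.put(1, {"type": "sensor", "unit": "C", "reading": 21.5})
store.put(2, {"type": "tweet", "user": "alice", "text": "hello"})
store.put(3, {"type": "sensor", "unit": "Pa", "reading": 101.3})
print(store.find("type", "sensor"))    # both sensor documents, no tweet

A relational table would force every record into the same columns; here the sensor and tweet documents coexist with different fields, and the index still keeps the search efficient.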
² http://en.wikipedia.org/wiki/Navigational_database.