upload their data on the web for open use of their information [4,11,30,31,40]. In this chapter, we aim to integrate environmental knowledge from multiple sources, provide knowledge recommendation, and publish the knowledge as LOD to help the environmental decision-support community.
15.2 WHAT IS BIG KNOWLEDGE?
In scientific domains, contextual interpretation and understanding of Big Data form the foundation of knowledge, or big knowledge. Knowledge is the meaning derived from data [19]. Big knowledge representation is the next level above Big Data representation within any data-based predictive analytics system. Knowledge representation, a subtopic of artificial intelligence (AI), concerns how knowledge is organized and processed: what kinds of data structures an intelligent system uses and what kinds of reasoning can and cannot be performed with that knowledge. In both cases, suitable representations and data structures are required to encode the knowledge. Any problem-solving task presupposes some sort of knowledge representation. An interesting question in psychology is what type or types of knowledge representation the mind uses [16,59]. A mental lexicon can be
organized in the memory as a list or as a set. These two different structures permit
different operations. A list encodes the order of the words whereas a set, by defini-
tion, is ignorant of the order of the items. So, a list allows us to query the first word,
the second word, the third word, and so on; this is not possible with a set. Whether to use a set or a list depends on the psychological theory being modeled: if, according to that theory, the order of words matters, a list must be used; if not, a set is acceptable [60]. If a set suffices, one must still decide how to implement it.
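The list-versus-set distinction can be sketched in a few lines of Python; the lexicon words below are invented for illustration:

```python
# A mental lexicon stored as a list vs. as a set (hypothetical words).
lexicon_list = ["tree", "river", "soil", "tree"]  # order preserved, duplicates kept
lexicon_set = set(lexicon_list)                   # order and duplicates discarded

# A list supports positional queries: "what is the first word?"
first_word = lexicon_list[0]

# A set cannot answer a positional query, but a hash-based set gives
# constant average-time membership tests.
has_river = "river" in lexicon_set

print(first_word, has_river)
```

Note that the set silently collapses the duplicate "tree", which is exactly the kind of behavioral difference that makes the choice of structure a modeling decision rather than a mere implementation detail.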
Using a hash table, for example, guarantees constant average search time. Knowledge representation plays a vital role in AI-based recommendation over big environmental data because it determines, to a very large extent, what kind of reasoning can be done with the knowledge, how fast that reasoning is, how much memory is consumed, and how optimal and complete the algorithms that use the knowledge are. According to the World Wide Web Consortium (W3C), RDF is a standard model for machine-readable data representation [7,18,45,55,62]. It decomposes data into triples of three parts (subject, predicate, and object) and assigns a URI to each resource (Reference). By accessing these URIs over HTTP, the information about a particular resource can be read on the web. This makes the integrated environmental feature-based knowledge ready for flexible web integration. The RDF format provides features that facilitate data integration even when the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all data consumers to be changed [18,62].
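To make the triple decomposition concrete, the following is a minimal pure-Python sketch; the station and namespace URIs are invented placeholders, and a production system would use a dedicated RDF library (e.g., rdflib) rather than plain tuples:

```python
# Sketch of RDF's triple decomposition with hypothetical URIs.
# Each statement is a (subject, predicate, object) triple; because subjects
# and predicates are URIs, each can in principle be dereferenced over HTTP
# to read further information about that resource.

STATION = "http://example.org/station/42"  # hypothetical subject URI

triples = [
    (STATION, "http://example.org/ns#measures", "http://example.org/ns#AirTemperature"),
    (STATION, "http://example.org/ns#locatedIn", "http://example.org/region/7"),
]

def describe(subject, graph):
    """Collect every (predicate, object) pair for a subject -- conceptually,
    what dereferencing the subject's URI would return."""
    return [(p, o) for s, p, o in graph if s == subject]

for predicate, obj in describe(STATION, triples):
    print(predicate, "->", obj)
```

Because every statement has the same three-part shape, new sources can be merged by simply concatenating their triples, which is the property that makes schema evolution possible without changing existing data consumers.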
15.2.1 Motivation and Workflow
Large-scale environmental knowledge integration from complementary "Big Data" sources using a semantically guided machine learning approach is the main focus of this chapter. The fundamental science challenge behind this work is to design and implement a dynamic data-processing architecture to process multiple environmental