things and people. The actors involved in IoT scenarios will have extremely het-
erogeneous characteristics, in terms of processing and communication capabili-
ties, energy supply and consumption, availability, and mobility, spanning from
constrained devices, also denoted as “smart objects,” to smartphones and other
personal devices, Internet hosts, and the Cloud. Shared and interoperable com-
munication mechanisms and protocols are currently being defined and standard-
ized, allowing heterogeneous nodes to efficiently communicate with each other
and with existing Internet actors. The most prominent driver for interoperabil-
ity in the IoT is the adoption of the Internet Protocol (IP), namely IPv6 [1, 2].
An IP-based IoT will be able to extend and interoperate seamlessly with the
existing Internet. Standardization institutions, such as the Internet Engineer-
ing Task Force (IETF) [3], and several research projects [4] are in the process
of defining mechanisms to bring IP to smart objects, due to the need to adapt
higher-layer protocols to constrained environments. However, not all objects will
support IP: there will always be tiny devices organized in closed/proprietary
networks that rely on very simple, application-specific
communication protocols. These networks will eventually connect to the Internet
through a gateway/border router. In this context, with billions of nodes capa-
ble of gathering data and generating information, Big Data techniques address
the need to process extremely large amounts of heterogeneous data for multiple
purposes. These techniques have been designed mainly to deal with huge vol-
umes (focusing on the data itself), rather than to provide real-time processing
and dispatching. Cloud computing has found a direct application in Big Data
analysis due to its scalability, robustness, and cost-effectiveness. Moreover, the
processing and storage functions implemented by remote Cloud-based collectors
are the enablers of their core business, which involves providing services based
on the collected and processed data to external consumers.
IoT applications provide useful services to end users as a consequence of the
processing performed on the huge amounts of data collected by smart objects.
Moreover, several reference IoT scenarios, such as industrial automation, transportation,
and networks of sensors and actuators, require real-time/predictable latency and
could even change their requirements (e.g., in terms of data sources) dynami-
cally and abruptly. This can be mistakenly considered only as a Big Data sce-
nario, but it is important to note that Smart-X services significantly differ from
traditional Internet services, in terms of: (i) number of data sources; (ii) rate of
information exchange; and (iii) need for real-time processing. The requirements
listed above create a new need for Cloud architectures specifically designed to
handle this kind of scenario and to guarantee minimal processing latency. Such
systems are denoted as “Big Stream” systems. Big Data architectures generally
use traditional processing patterns with a pipeline approach [5]. These architec-
tures are based on a processing approach where the data flow goes downstream
from input to output, to perform specific tasks or reach the target goal. Typ-
ically, the information follows a pipeline where data are sequentially handled
by pre-defined, tightly coupled processing sub-units (static data routing). The
described paradigm can be defined as “process-oriented:” a central coordination
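
The sequential, statically routed pipeline pattern described above can be sketched in a few lines. The stage names, the valid-value range, and the "sensor:value" payload format below are illustrative assumptions, not part of any referenced architecture:

```python
# Minimal sketch of a "process-oriented" pipeline: data flow downstream,
# from input to output, through a fixed chain of tightly coupled stages
# (static data routing). Stage names and payload format are assumptions.

def parse(raw):
    # Stage 1: decode a raw "sensor:value" payload into a record.
    sensor_id, value = raw.split(":")
    return {"sensor": sensor_id, "value": float(value)}

def filter_outliers(record):
    # Stage 2: discard implausible readings (assumed valid range 0-100).
    return record if 0.0 <= record["value"] <= 100.0 else None

def enrich(record):
    # Stage 3: attach derived information to the record.
    record["alarm"] = record["value"] > 90.0
    return record

PIPELINE = [parse, filter_outliers, enrich]  # pre-defined, static stage order

def process(raw):
    data = raw
    for stage in PIPELINE:
        data = stage(data)
        if data is None:  # a stage may drop the datum; later stages are skipped
            break
    return data

print(process("temp-1:95.5"))  # {'sensor': 'temp-1', 'value': 95.5, 'alarm': True}
```

Because the stage sequence is fixed at definition time, serving a new consumer with different processing needs means modifying the pipeline itself, which illustrates why static routing struggles when data sources or requirements change dynamically.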