Table 3. Mandatory criteria for functional IOP evaluation

Shared information: IOP provides the functionality required for information interoperability and service interoperability.
Simplicity: The service interoperability level provides constructs for handling the building blocks of the information interoperability level.
Agnosticism: IOP is agnostic to the software technologies used, including ontologies, programming languages, service technologies, and legacy systems.
Extensibility: The space is extensible; information is extensible, and knowledge interpretations are extensible. IOP supports run-time information mash-up. A qualitative evaluation method is used for extensibility evaluation.
Notification: A set of detection and notification mechanisms is provided for context sensing, activating specific functionality, and alarming about upcoming events; i.e., the reactive and proactive actions triggered by changes, data, or events should be possible.
Security and trust: IOP produces information with relevant indicators of its source and of the quality of its source.
Evolvability: IOP provides an evolvable information-sharing environment; i.e., devices and services can be changed without affecting applications.
Context: IOP provides a mechanism for searching and adapting information that is relevant for the requestor's purposes, if the information exists in the SS and is available to the requestor.
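The Notification criterion above can be illustrated with a minimal sketch: applications register reactive actions with a shared space, and changes to the stored information trigger those actions. All names here (SmartSpace, subscribe, insert) are hypothetical and do not reflect the actual IOP API.

```python
# Minimal sketch of the Notification criterion: a shared information store
# that lets applications subscribe to changes and reacts when matching
# data is inserted. Illustrative only; not the real IOP interface.

from collections import defaultdict
from typing import Callable

class SmartSpace:
    """Toy shared information store with change notification."""

    def __init__(self) -> None:
        self._triples: list[tuple[str, str, str]] = []
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, predicate: str, callback: Callable) -> None:
        # Register a reactive action for changes involving `predicate`.
        self._subscribers[predicate].append(callback)

    def insert(self, subject: str, predicate: str, obj: str) -> None:
        # Store the information, then notify interested parties.
        self._triples.append((subject, predicate, obj))
        for callback in self._subscribers[predicate]:
            callback(subject, predicate, obj)

ss = SmartSpace()
events = []
ss.subscribe("temperature", lambda s, p, o: events.append((s, o)))
ss.insert("sensor1", "temperature", "21.5")
print(events)  # [('sensor1', '21.5')]
```

The same subscription mechanism can serve both reactive use (responding to a change that has occurred) and proactive use (detecting a condition that precedes an upcoming event).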
evaluation and testing are the duty of the application developers. However, testing should be made as easy as possible, if not automatic.
The evaluation criteria for IOP have been derived from the IOP requirements. These criteria are classified into two categories: i) criteria for functional evaluation; and ii) criteria for design-time and run-time quality evaluation. The mandatory functional criteria are listed in Table 3. Optional functional criteria concern the IOP extensions, which are not required in every IOP instance.
The quality criteria, defined in Table 4, concentrate on the capabilities that should be covered by designs and implementations and should be evaluated at development time and/or run time. In smart spaces, these quality criteria are taken into account in the designs of the interoperability platform and of the applications developed on top of it. At design time, qualities can be evaluated through simulation or by using quality-attribute-specific prediction methods, as described in (Ovaska et al. 2010). Simulation and prediction are applied only to some parts of smart spaces, not to the whole smart space, since, due to the dynamics of an SS, it is not possible to simulate all of its possible states.
Thus, the quality evaluation has two parts: i) simulation and prediction methods are applied to a specific purpose and part of IOP, e.g., the performance of a SIB deployment; and ii) run-time quality monitoring and visualization is used to evaluate the fulfillment of the quality criteria at run time. As security, performance, and dependability are execution qualities, they are evaluated from a running smart space. The metrics used at development time and at execution time differ, and therefore various measuring techniques are required. The criteria for execution-time evaluation are defined by a set of ontologies, each of which focuses on one specific quality attribute. So far, we have defined a security metrics ontology; reliability and performance metrics ontologies are under development. All of these require extensive experimentation and validation before they can be leveraged by SSA developers.
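The idea of run-time quality monitoring against per-attribute criteria can be sketched as follows. Measured values from the running smart space are checked against a criterion defined for each quality attribute, echoing the one-ontology-per-attribute approach; all metric names and thresholds below are illustrative assumptions, not values from the IOP work.

```python
# Hedged sketch of run-time quality monitoring: each quality attribute
# (security, performance, dependability) contributes criteria, and
# run-time measurements are checked against them. Illustrative only.

from dataclasses import dataclass

@dataclass
class QualityCriterion:
    attribute: str      # e.g. "performance" or "dependability"
    metric: str         # metric name, e.g. "query_latency_ms"
    threshold: float    # acceptable upper bound observed at run time

    def fulfilled(self, measured: float) -> bool:
        # The criterion holds if the measurement stays within the bound.
        return measured <= self.threshold

criteria = [
    QualityCriterion("performance", "query_latency_ms", 200.0),
    QualityCriterion("dependability", "failed_queries_pct", 1.0),
]

# Measurements gathered from the running smart space (made-up values).
measurements = {"query_latency_ms": 150.0, "failed_queries_pct": 2.5}

# Fulfillment report, suitable for run-time visualization.
report = {c.metric: c.fulfilled(measurements[c.metric]) for c in criteria}
print(report)  # {'query_latency_ms': True, 'failed_queries_pct': False}
```

Such a report, refreshed continuously, is what a run-time visualization would display; the actual metric definitions would come from the corresponding metrics ontologies rather than being hard-coded.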
Although a systematic evaluation of the defined quality characteristics is still under development, the following concurrent development activities are ongoing: First, a design-time evaluation
 