the Synthesis Enabler performs automated behavioural matching and mapping of the two models. It uses the ontology-based semantics of actions to determine where two sequences of actions in the two behaviours are semantically equivalent; based on this, the matching and mapping algorithms derive an LTS model that represents the mediator. In short, for both affordance protocols, the mediator LTS defines the sequences of actions that translate actions from one protocol to the other, including any necessary re-ordering of actions.
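As a rough illustration of the matching idea (not the actual Synthesis Enabler; the action names and the hand-written ontology below are purely hypothetical), semantically equivalent actions of two affordance protocols can be paired by their shared ontology concept, independently of their position in each protocol:

```python
# Hypothetical sketch: pair up actions of two protocols via an ontology
# of semantically equivalent actions, yielding mediator translation rules.

# Ontology: each concept groups actions considered semantically equivalent.
ONTOLOGY = {
    "order": {"PlaceOrder", "SubmitPurchase"},
    "confirm": {"AckOrder", "ConfirmPurchase"},
}

def concept_of(action):
    """Return the ontology concept an action belongs to, if any."""
    for concept, actions in ONTOLOGY.items():
        if action in actions:
            return concept
    return None

def match(protocol_a, protocol_b):
    """Derive translation rules (action of A -> action of B).

    Actions are matched by shared concept, not by position, so the two
    protocols may order their actions differently.
    """
    by_concept_b = {concept_of(b): b for b in protocol_b}
    rules = {}
    for a in protocol_a:
        c = concept_of(a)
        if c in by_concept_b:
            rules[a] = by_concept_b[c]
    return rules

rules = match(["PlaceOrder", "AckOrder"], ["ConfirmPurchase", "SubmitPurchase"])
print(rules)  # {'PlaceOrder': 'SubmitPurchase', 'AckOrder': 'ConfirmPurchase'}
```

A real mediator LTS would additionally track protocol state so that translated actions are emitted in an order both sides accept; this sketch covers only the semantic pairing step.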
The Learning phase is a continuous process in which the knowledge about NSs is enriched over time, implying that the Emergent Middleware may need to adapt as this knowledge evolves. In particular, the synthesised Emergent Middleware is equipped with monitoring probes that gather information on the actual interactions between the connected systems. This observed Monitoring Data is delivered to the Learning Enabler, where the learned hypotheses about the NSs' behaviour are compared to the observed interactions. Whenever the monitoring probes make an observation that is not contained in the learned behavioural models, another iteration of learning is triggered, yielding refined behavioural models. These models are then used to synthesise and deploy an evolved Emergent Middleware.
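The monitor-and-relearn loop described above can be sketched as follows (a minimal illustration with invented state and action names, not the actual Learning Enabler): an observed trace is checked against the learned LTS, and a mismatch triggers another learning iteration.

```python
# Hypothetical sketch of the monitoring loop: if an observed interaction
# trace is not accepted by the learned behavioural model, re-trigger learning.

def accepts(model, trace):
    """Check whether a trace is a path in the learned LTS.

    `model` maps (state, action) -> next state; the start state is "s0".
    """
    state = "s0"
    for action in trace:
        key = (state, action)
        if key not in model:
            return False
        state = model[key]
    return True

def on_observation(model, trace, relearn):
    """Keep the model if it explains the trace; otherwise refine it."""
    if not accepts(model, trace):
        return relearn(trace)  # yields a refined behavioural model
    return model

# Usage: a two-state model that only knows a "req"/"resp" exchange.
model = {("s0", "req"): "s1", ("s1", "resp"): "s0"}
print(accepts(model, ["req", "resp"]))    # True
print(accepts(model, ["req", "cancel"]))  # False -> would trigger relearning
```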
3 Machine Learning: A Brief Taxonomy
Machine learning is the discipline that studies methods for automatically inducing functions (or systems of functions) from data. This broad definition of course covers an endless variety of subproblems, ranging from the least-squares linear regression methods typically taught at undergraduate level [20] to advanced structured-output methods that learn to associate complex objects in the input [18] with objects in the output [14], or methods that infer whole computational structures [10]. To better understand the broad range of machine learning, one must first understand the conceptual differences between learning setups in terms of their prerequisites:
- Supervised learning is the most archetypical problem setting in machine learning. In this setting, the learning mechanism is provided with a (typically finite) set of labelled examples: a set of pairs T = { ( x, y ) } . The goal is to make use of the example set T to induce a function f , such that f ( x ) = y , for future unseen instances of ( x, y ) pairs (see for example [20]). A major hurdle in applying supervised learning is the often enormous effort of labelling the examples.
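The least-squares linear regression mentioned above is the simplest instance of this setting; a minimal sketch (with made-up example data) induces f(x) = a·x + b from the labelled pairs in T:

```python
# Minimal supervised-learning example: closed-form least-squares linear
# regression for a one-dimensional input, inducing f from T = {(x, y)}.

def fit_line(pairs):
    """Return slope a and intercept b minimising the squared error."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Labelled examples generated by y = 2x + 1 (noise-free, for clarity).
T = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(T)
print(a, b)  # 2.0 1.0

# The induced f generalises to unseen instances of x:
f = lambda x: a * x + b
print(f(10))  # 21.0
```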
- Unsupervised learning lowers the entry hurdle for application by requiring only unlabelled example sets, i.e., T = { x } . In order to be able to come up with anything useful when no supervision is provided, the learning mechanism needs a bias that guides the learning process. The most well-known example of unsupervised learning is probably k -means clustering, where the learner learns to categorise objects into broad categories even though the