4.5.3.1. Modeling of the output interaction
In what follows, we model the interface that enables a system to respond to a request according to the scenario described above. We consider I, the set of output information, constituted of the singleton i = "here you can see the map of the city" combined with the image of the map of the town of Heidelberg, and UIE, the set of elementary information units, constituted of the elements uie1 = "here you can see the map of the city" and uie2 = image of the map of Heidelberg. We also consider the sets:
MOD = {speech, facial expression, image}
MED = {screen, loud-speaker}
ITEM = {(speech, loud-speaker), (facial expression, screen), (image, screen)}
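
As an illustration, these three sets can be written down directly as data structures. The following Python sketch is purely illustrative; the variable names and the use of tuples for the (modality, medium) pairs are assumptions made here, not part of the formal model.

# Illustrative sketch: the sets MOD, MED and ITEM of the example,
# written as plain Python data structures.

MOD = {"speech", "facial expression", "image"}   # modalities
MED = {"screen", "loud-speaker"}                 # media
ITEM = {                                         # admissible (modality, medium) pairs
    ("speech", "loud-speaker"),
    ("facial expression", "screen"),
    ("image", "screen"),
}

# Basic consistency check: every pair in ITEM uses a known modality and medium.
assert all(mod in MOD and med in MED for mod, med in ITEM)
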
4.5.3.2. Semantic fission
With i = "here you can see the map of the city" combined with the image of the map of Heidelberg:
I = {i}
UIE = {uie1, uie2}
with:
uie1 = "here you can see the map of the city",
uie2 = image of the map of Heidelberg.
The fissioned information i is expressed by the parallel temporal combination:
i = (Pl, Cp)(uie1, uie2)
Thus, we deduce that the interface was designed according to the synergistic type of multimodality.
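
To make the fission step concrete, the sketch below represents the information i, its elementary units and the operator pair (Pl, Cp) as small Python structures. The class and field names are assumptions introduced here for illustration; the formal notation above remains the authoritative definition.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ElementaryUnit:
    """An elementary information unit (uie)."""
    name: str
    content: str

@dataclass(frozen=True)
class FissionedInformation:
    """An information item expressed as a temporal combination of elementary units."""
    operators: Tuple[str, str]          # operator pair, here (Pl, Cp) as in the text
    units: Tuple[ElementaryUnit, ...]   # the elementary units the information splits into

uie1 = ElementaryUnit("uie1", "here you can see the map of the city")
uie2 = ElementaryUnit("uie2", "image of the map of Heidelberg")

# i = (Pl, Cp)(uie1, uie2): the parallel temporal combination of uie1 and uie2.
i = FissionedInformation(operators=("Pl", "Cp"), units=(uie1, uie2))
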
4.5.3.3. Allocation
The elementary multimodal presentations pme1 and pme2 respectively corresponding to the elementary information units uie1 and uie2 are:
pme1 = (speech, loud-speaker)(uie1) compl (facial expression, screen)(uie1)
The information uie1 is expressed by the speech modality on the loud-speaker, complemented by the facial expression on the screen (rendering of the conversational agent's dialogue):
pme2 = (image, screen)(uie2)
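
A minimal sketch of the allocation step, under the same illustrative assumptions as above: each elementary unit is allocated to one or more (modality, medium) pairs taken from ITEM, and the compl operator of pme1 is represented simply as a list of pairs attached to the same unit.

# Illustrative allocation: each uie is allocated to (modality, medium) pairs from ITEM.
ITEM = {
    ("speech", "loud-speaker"),
    ("facial expression", "screen"),
    ("image", "screen"),
}

# pme1 = (speech, loud-speaker)(uie1) compl (facial expression, screen)(uie1)
pme1 = {"unit": "uie1",
        "pairs": [("speech", "loud-speaker"), ("facial expression", "screen")]}

# pme2 = (image, screen)(uie2)
pme2 = {"unit": "uie2",
        "pairs": [("image", "screen")]}

# Every allocated pair must belong to ITEM.
for pme in (pme1, pme2):
    assert all(pair in ITEM for pair in pme["pairs"])
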