manifested by the same underlying process (e.g. different sides of
the same coin), the information is conveyed by each modality in a
different way. For instance, two gestures produced together do not
necessarily express a complex combined meaning. Gestures are also
not completely linguistic in nature: several different gestures may
convey a similar meaning, whereas similar gestures may convey
entirely different meanings. There are also no grammatical rules
governing the movement structure by which a gesture unfolds. Language,
on the other hand, has grammar and word order.
2. Related Work
Research into the communicative behavior of human-like ECAs is
a very active topic. In general, the theories and concepts rely on the
understanding of communicative behavior (Allwood et al., 2007a;
McNeill, 2005). These concepts are then transformed into specifications
of ECAs' embodied (virtual) movement represented as coverbal
gestures. The growth point theory (McNeill, 1992) suggests the
representation of 'verbal' thinking in the form of idea units. The
interaction between two active modes of thinking, the linguistic and
the imagistic, triggers language forms and the manifestation of coverbal
gestures. These gestures are generated by spatio-motoric processes
that interact on-line with the speech-production process. In contrast
to the imagistic knowledge, the featural model (Krauss et al., 2000; de
Melo and Paiva, 2007) is based on propositional and non-propositional
features. Behavior models deduced from these theories are mostly
based on a motor planner and lexical retrieval. Additional
language-dependent models further suggest that the produced gestures
are influenced by the speaker's language (Kita and Özyürek, 2003).
The SAIBA framework (Kopp et al., 2006; Vilhjalmsson et al., 2007)
supports multimodal behavior generation and re-creation
using ECAs. The framework suggests the usage of knowledge
structures that describe the form of communicative behavior, and the
life-span of synthetic behavior at different levels of abstraction. The
three levels of abstraction represent interfaces between those processes
used for: (a) behavioral planning (e.g. planning of a communicative
event), (b) multimodal realization planning (e.g. describing the
multimodal channels used for the realization of communicative
events), and (c) realization of planned behaviors on an ECA. The
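The separation into these three stages can be illustrated with a minimal sketch. All function names and message fields below are hypothetical, chosen only to mirror the (a)-(c) stages described above; the actual SAIBA interfaces are defined by its markup languages, not by these names:

```python
# Hypothetical sketch of the three SAIBA processing stages.
# Stage names follow (a) behavioral planning, (b) multimodal
# realization planning, and (c) realization on an ECA; the data
# shapes here are illustrative, not part of the SAIBA specification.

def plan_intent(utterance: str) -> dict:
    # (a) Behavioral planning: decide WHAT communicative event to produce.
    return {"intent": "inform", "content": utterance}

def plan_behavior(intent: dict) -> list:
    # (b) Multimodal realization planning: map the intent onto
    # multimodal channels (speech plus a coverbal gesture).
    return [
        {"channel": "speech", "text": intent["content"]},
        {"channel": "gesture", "type": "beat"},
    ]

def realize(behaviors: list) -> list:
    # (c) Realization: drive the ECA's animation/TTS back end
    # (stubbed here as formatted strings).
    return [f"{b['channel']}: {b.get('text', b.get('type'))}" for b in behaviors]

# Because the stages are modular and relatively independent,
# each one can be replaced without changing the others.
schedule = plan_behavior(plan_intent("Hello!"))
print(realize(schedule))
```

Each stage consumes only the output of the previous one, which is what lets the three-layered structure be adopted by different behavior-generation systems.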
concept of the three-layered SAIBA behavioral model represents a
structure that can be adopted by any multimodal behavior generation
system. The three processing stages are modular and relatively
independent. Each stage may introduce a wide spectrum of different