algorithms that produce the desired results. However, the transitions
between stages are bidirectional in nature: each stage delivers input
to the next stage and also accepts feedback data flowing back from it.
The processing within each stage, and its internal structure, is largely
treated as a 'black box' and left open to further research.
The interface between stages (a) and (b), the intent-planning and
behavior-planning stages, describes the communicative and expressive
intent of the communicative event without reference to physical behavior.
The intent-planning stage describes the function of verbal and coverbal
behavior within a communicative event. It therefore provides a semantic
description that covers those aspects that are relevant to, and may
influence, the planning of verbal and non-verbal behavior. FML (Function
Markup Language) (Heylen et al., 2008) is used to specify this semantic
data. An FML description defines the basic semantic units associated with
a communicative event and allows these units to be annotated with
properties that further describe the communicative function of multimodal
behavior (e.g. expressive, affective, discursive, epistemic, or pragmatic).
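As a rough illustration, the following Python sketch assembles an FML-like description; the element and attribute names used here (performative, emotion, certainty) are assumptions chosen for readability, not the exact FML schema.

  # Illustrative sketch only: builds an FML-like semantic description.
  # Element and attribute names are assumptions, not the official FML schema.
  import xml.etree.ElementTree as ET

  fml = ET.Element("fml")

  # A basic semantic unit of the communicative event (a discursive/pragmatic function).
  ET.SubElement(fml, "performative", {"id": "p1", "type": "inform"})

  # Properties annotating that unit with further communicative functions.
  ET.SubElement(fml, "emotion", {"id": "e1", "type": "joy", "target": "p1"})      # affective
  ET.SubElement(fml, "certainty", {"id": "c1", "value": "high", "target": "p1"})  # epistemic

  print(ET.tostring(fml, encoding="unicode"))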
The interface between stages (b) and (c), the behavior-planning and
behavior-realization stages, describes the physical features of the
multimodal behaviors to be realized by the final stage of the SAIBA
behavioral model. BML (Behavior Markup Language) (Vilhjalmsson et al.,
2007) is the language suggested for this mediation. Most existing
behavior-realization (animation) engines are capable of realizing every
aspect of behavior (verbal, gestural, phonological, etc.) that the
behavior planner may specify.
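By comparison, a behavior-level description pairs concrete verbal and gestural behaviors and their timing. The sketch below builds a small BML-like specification in the same style as above; BML does define elements such as speech, gesture, and synchronization references, but the particular attribute values used here are illustrative assumptions.

  # Illustrative sketch only: a BML-like specification pairing speech with a gesture.
  # The attribute values are assumptions, not a validated BML document.
  import xml.etree.ElementTree as ET

  bml = ET.Element("bml", {"id": "bml1"})

  # Verbal channel: an utterance with a named synchronization point inside the text.
  speech = ET.SubElement(bml, "speech", {"id": "s1"})
  text = ET.SubElement(speech, "text")
  text.text = "Welcome to the "
  sync = ET.SubElement(text, "sync", {"id": "tm1"})
  sync.tail = "lab."

  # Gestural channel: align the gesture stroke with the marked point in the speech.
  ET.SubElement(bml, "gesture", {"id": "g1", "lexeme": "BEAT", "stroke": "s1:tm1"})

  print(ET.tostring(bml, encoding="unicode"))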
Current research into coverbal expression synthesis involving ECAs largely
agrees with the SAIBA architecture, providing independent systems for
content planning (what to display), behavior planning (how to display it),
and behavior realization (the physical/virtual generation of the artificial
behavior). Speech and gesture production are, in general, addressed as two
independent processes and synchronized prior to execution.
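A minimal sketch of this three-way split, assuming each stage is a black-box function and that the realizer aligns a gesture stroke with word onsets reported by the speech synthesizer (all names, data structures, and the fake timing are hypothetical):

  # Hypothetical sketch of a SAIBA-style pipeline: intent -> behavior plan -> realization.
  from dataclasses import dataclass

  @dataclass
  class BehaviorPlan:
      text: str             # utterance to synthesize
      gesture: str          # gesture lexeme to perform
      sync_word_index: int  # word whose onset the gesture stroke should match

  def plan_intent(event: str) -> dict:
      # Stage (a): what to communicate (semantic, FML-level description).
      return {"function": "inform", "content": event, "affect": "neutral"}

  def plan_behavior(intent: dict) -> BehaviorPlan:
      # Stage (b): how to communicate it (physical, BML-level description).
      return BehaviorPlan(text=intent["content"], gesture="BEAT", sync_word_index=0)

  def realize(plan: BehaviorPlan) -> None:
      # Stage (c): schedule speech and gesture as two streams, synchronized up front.
      word_onsets = [i * 0.4 for i, _ in enumerate(plan.text.split())]  # fake TTS timing
      stroke_time = word_onsets[plan.sync_word_index]
      print(f"speak: {plan.text!r}")
      print(f"gesture {plan.gesture}: stroke at t={stroke_time:.1f}s")

  realize(plan_behavior(plan_intent("Welcome to the lab")))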
For instance, the Articulated Communicator Engine (ACE) (Kopp and
Wachsmuth, 2004) is a behavior-realization engine that allows virtual
animated agents to be modeled using the MURML gesture-description language
(Kranstedt et al., 2002). ACE is independent of the graphics platform and
can synthesize multimodal utterances containing prosodic speech
synchronized with body and hand gestures, or facial expressions. Its smart
scheduling techniques and its blending/co-articulation of gesture and
speech are enabled by a connection between behavior planning, behavior
realization, and behavior execution. ACE also allows a user to define
synchronization points between channels, but automatically handles