Finally, the Directness Index, a measure of movement straightness, is
associated with Fluidity.
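As a rough sketch, the Directness Index can be computed as the ratio of the straight-line distance between a trajectory's endpoints to the length of the path actually travelled; the function name and normalization below are illustrative assumptions, and the exact formulation used in EyesWeb may differ.

```python
import math

def directness_index(trajectory):
    """Ratio of the straight-line distance between the first and last
    points of a 2D trajectory to the total path length.  Values close
    to 1 indicate a nearly straight (direct) movement; lower values
    indicate curved, indirect movement."""
    if len(trajectory) < 2:
        return 1.0
    # Length of the path actually travelled, point to point.
    path_length = sum(
        math.dist(a, b) for a, b in zip(trajectory, trajectory[1:])
    )
    if path_length == 0:
        return 1.0
    # Straight-line distance between the endpoints.
    straight = math.dist(trajectory[0], trajectory[-1])
    return straight / path_length
```

A perfectly straight trajectory such as `[(0, 0), (1, 0), (2, 0)]` yields 1.0, while a detour through `(1, 1)` lowers the index.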
Pugliese and Lehtonen (2011) propose the creation of an enactive
loop: a user and a virtual agent simulate the situation in which two
humans are in the same room but separated by a glass pane, so
they can communicate only through body movements. However, as in
the applications above, the interaction takes place only at the level of
expressive gesture quality. The proposed system allows a mapping to be
defined between the user's detected Quantity of Motion and Distance
(the distance of the person from the glass) and the same movement
features of the agent. This mapping is unconstrained, that is, the agent
may simply imitate the user, or it may, for example, respond with the
opposite behavior (e.g., if the user moves quickly, the agent moves
slowly, and so on).
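Such an unconstrained mapping might be sketched as follows; the function, parameter names, and normalization ranges are hypothetical, chosen only to illustrate how an "imitate" mode and an "oppose" mode can share one mapping interface.

```python
def map_features(user_qom, user_distance, mode="imitate",
                 qom_max=1.0, dist_max=5.0):
    """Map the user's detected Quantity of Motion (QoM) and distance
    from the glass onto the agent's movement features.  Inputs are
    normalized to [0, 1] using assumed maxima; in "imitate" mode the
    agent mirrors the user, in "oppose" mode it produces the
    complementary behavior (the user moves quickly, the agent moves
    slowly).  Illustrative sketch, not the authors' implementation."""
    qom = min(max(user_qom / qom_max, 0.0), 1.0)
    dist = min(max(user_distance / dist_max, 0.0), 1.0)
    if mode == "imitate":
        return {"agent_qom": qom, "agent_distance": dist}
    if mode == "oppose":
        return {"agent_qom": 1.0 - qom, "agent_distance": 1.0 - dist}
    raise ValueError(f"unknown mapping mode: {mode}")
```

With `mode="imitate"` a fast, close user produces a fast, close agent; with `mode="oppose"` the same input produces a slow, distant one.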
3. A Case Study: The EyesWeb XMI Platform
and the Greta ECA
In this section, we introduce a concrete example of two existing systems
for expressive gesture quality analysis and synthesis that implement
some of the algorithms described in the previous sections. The
EyesWeb XMI platform is a modular system that allows both expert
users (e.g., researchers in computer engineering) and non-expert users
(e.g., artists) to create multimodal installations visually (Camurri
et al., 2007). The platform provides modules, called blocks, that can
be assembled intuitively (i.e., by dragging, dropping, and connecting
them with the mouse) to create programs, called patches, that exploit
system resources such as multimodal files, webcams, sound cards,
multiple displays, and so on.
Greta (Niewiadomski et al., 2011) is a virtual agent able to
communicate various communicative intentions both verbally and
nonverbally. For nonverbal communication, it can display facial
expressions, gestures, and torso and head movements. Greta is
controlled using two XML-based languages, BML and FML-APML. It is
part of several interactive multimodal systems working in real time,
e.g., SEMAINE (Schröder et al., 2011) and AVLaughterCycle (Urbain et
al., 2010).
3.1 Expressive gesture quality analysis framework
In this section, we describe a framework for multi-user nonverbal
expressive gesture quality analysis. Its aim is to facilitate the
construction of computational models that analyze the nonverbal
behavior conveying the emotions expressed by users.