Figure 1. The user's expressive gesture quality is analyzed to influence the agent; in a
symmetrical way, the agent's expressive gesture quality influences the user's behavior.
(Color image of this figure appears in the color plate section at the end of the topic.)
features of the human behavior are mapped to the agent's behaviors.
Consequently, the agent does not repeat the human's movements; rather,
its own behavior is modified to fit the user's expressive behavior
profile. The system was only partially implemented and did not run
in real time, so natural human-machine interaction was not possible.
Instead, the gesture expressivity parameters detected by the gesture
analysis module were used to manually control a virtual agent that
implements Hartmann's model of expressive behavior (Hartmann et al.,
2005; see also Sections 2.2 and 3). Caridakis et al. (2007) proposed a
mapping between the expressive features of human behavior and the
agent's expressivity parameters: the sum of the variance of the norms
of the motion vectors is associated with the agent's Fluidity; the first
derivative of the motion vector with Power; the distance between the
hands with Spatial Extent; and the sum of the motion vectors with
Overall Activity.
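The mapping just described can be sketched in code. The following is a minimal, hedged illustration, not the actual implementation: it assumes the motion data arrive as an array of per-frame 2-D motion vectors plus a scalar hand distance, and all function and key names are invented for the example.

```python
import numpy as np

def expressivity_from_motion(motion_vectors, hand_distance):
    """Illustrative sketch of the Caridakis et al. (2007) feature mapping.

    `motion_vectors` is assumed to be an (N, 2) array of per-frame motion
    vectors; `hand_distance` is a scalar distance between the hands.
    Names and data layout are assumptions, not the original system's API.
    """
    norms = np.linalg.norm(motion_vectors, axis=1)
    # Variance of the motion-vector norms -> Fluidity
    fluidity = float(np.var(norms))
    # First derivative (frame-to-frame change) of the motion vectors -> Power
    power = float(np.linalg.norm(np.diff(motion_vectors, axis=0), axis=1).mean())
    # Distance between the hands -> Spatial Extent
    spatial_extent = float(hand_distance)
    # Sum of the motion-vector norms -> Overall Activity
    overall_activity = float(norms.sum())
    return {"fluidity": fluidity, "power": power,
            "spatial_extent": spatial_extent,
            "overall_activity": overall_activity}
```

Each output would then drive the corresponding expressivity parameter of Hartmann et al.'s (2005) agent model.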
A similar solution was proposed more recently by Mancini and
Castellano (2007). Unlike Caridakis et al.'s work, Mancini and
Castellano built a truly interactive system that takes video data
as input, extracts high-level behavior features using the EyesWeb XMI
platform (Camurri et al., 2007), and finally synthesizes them with a
virtual agent. As in Caridakis et al. (2007), the agent copies only
the expressive qualities of the human's movement but performs
different gestures. The video input is processed with EyesWeb XMI to
perform quantitative analysis of the human movement in real time.
The virtual agent uses the expressive model proposed by Hartmann et
al. (2005) (see previous section). The following mapping between the
features detected by EyesWeb XMI and the agent's expressive quality
of movement is then performed: Contraction Index is mapped to Spatial
Extent, Velocity of movement to Temporal Extent, and Acceleration to Power.
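This second mapping can likewise be sketched as a small function. The sketch below is an assumption-laden illustration, not Mancini and Castellano's code: it assumes the three EyesWeb XMI features arrive normalized to [0, 1] and that the agent's expressivity parameters lie in [-1, 1] (Hartmann et al.'s convention); it also assumes the Contraction Index is inverted, since a highly contracted posture plausibly corresponds to a small Spatial Extent.

```python
def map_eyesweb_features(contraction_index, velocity, acceleration):
    """Illustrative mapping from EyesWeb XMI features (assumed in [0, 1])
    to the agent's expressivity parameters (assumed in [-1, 1]).
    All names, ranges, and the inversion of the Contraction Index
    are assumptions made for this sketch."""
    def to_range(x):
        # Clamp to [0, 1], then rescale linearly to [-1, 1].
        x = min(max(x, 0.0), 1.0)
        return 2.0 * x - 1.0

    return {
        # High contraction -> small spatial extent (assumed inversion)
        "spatial_extent": to_range(1.0 - contraction_index),
        "temporal_extent": to_range(velocity),
        "power": to_range(acceleration),
    }
```

In the running system, these values would be recomputed continuously from the live video analysis and fed to the agent's animation engine.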