is represented by a vector of features: each feature encodes
the normalized rotation of one body joint around one of the
three axes (e.g., the rotation of the left/right shoulder/elbow
around the x, y, or z axis). Models for the automatic recognition
of four emotional states (frustrated, triumphant, concentrating,
and defeated) are then defined by providing 103 postures, each
represented by its vector of low-level features and the
corresponding label.
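A minimal sketch of this kind of representation, assuming a hypothetical joint list and a generic scikit-learn-style classifier (the source names neither):

```python
import numpy as np
from sklearn.svm import SVC  # hypothetical model choice; the source does not name one

# Hypothetical joint set; the source gives only shoulders/elbows as examples.
JOINTS = ["l_shoulder", "r_shoulder", "l_elbow", "r_elbow"]
AXES = ["x", "y", "z"]

def posture_features(rotations):
    """Flatten per-joint, per-axis rotations (already normalized)
    into a single low-level feature vector."""
    return np.array([rotations[j][a] for j in JOINTS for a in AXES])

# 103 labeled postures -> feature matrix X and label vector y, e.g.:
# X = np.stack([posture_features(r) for r in recorded_postures])
# y = ["frustrated", "triumphant", "concentrating", "defeated", ...]
# model = SVC().fit(X, y)
```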
In Bianchi-Berthouze (2012), the body movements of video game
players are analyzed both by observers and from motion capture
data to understand their role, e.g., movements that are functional to
the game vs. movements that express affect. The motion capture data
consist of the rotations of the players' body joints, and the amount
of body movement is computed as a normalized sum over all joints
across a gaming session.
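The source describes this measure only at a high level; the sketch below assumes "amount of movement" is the frame-to-frame change in joint rotations, summed and then normalized (both assumptions for illustration):

```python
import numpy as np

def amount_of_movement(joint_rotations):
    """joint_rotations: array of shape (frames, joints, 3) holding
    per-axis joint rotations over a gaming session.
    Returns one normalized movement quantity; the exact normalization
    is an assumption, as the source does not specify it."""
    deltas = np.abs(np.diff(joint_rotations, axis=0))  # frame-to-frame change
    total = deltas.sum()                               # sum over frames, joints, axes
    return total / deltas.size                         # normalize by sample count
```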
2.2 Expressive gesture quality synthesis
Two approaches for expressive gesture generation are widely used:
animation based on motion capture data and procedural animation.
First, expressive movement can be re-synthesized from motion
capture data. An example of such an approach is proposed by Tsuruta
et al. (2010) to generate emotional dance motions. In this work the
authors parameterize “standard” captured motions by modifying
the original speed of motion or altering its joint angles. Emotional
dance motions are parameterized by a small number of parameters
obtained empirically. Five emotional attitudes are considered: neutral,
passionate, cheerful, calm, and dark. The parameters directly influence
the joint values of a very simple body model with six degrees of
freedom (DOF), namely the knees, waist, and elbows.
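A hedged sketch of this style of parameterization, assuming the captured motion is stored as per-frame joint angles; the parameter names and example values are illustrative, not those of Tsuruta et al. (2010):

```python
import numpy as np

def emotionalize(motion, speed_scale=1.0, angle_offsets=None, angle_gain=1.0):
    """motion: array of shape (frames, dof) of captured joint angles.
    speed_scale > 1 plays the motion faster (fewer frames);
    angle_gain / angle_offsets exaggerate and shift joint angles.
    Per-emotion values (e.g., 'cheerful', 'dark') would be chosen
    empirically, as in the source."""
    frames, dof = motion.shape
    # Resample the timeline to alter playback speed.
    new_len = max(2, int(frames / speed_scale))
    t_old = np.linspace(0.0, 1.0, frames)
    t_new = np.linspace(0.0, 1.0, new_len)
    resampled = np.stack(
        [np.interp(t_new, t_old, motion[:, j]) for j in range(dof)], axis=1
    )
    offsets = np.zeros(dof) if angle_offsets is None else np.asarray(angle_offsets)
    return resampled * angle_gain + offsets

# e.g., a "cheerful" variant: faster, with exaggerated angles (illustrative values)
# cheerful = emotionalize(standard_motion, speed_scale=1.3, angle_gain=1.2)
```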
Several models have been proposed for procedural animation. In Allbeck
and Badler (2003), the choice of nonverbal behavior and the movement
quality depends on the agent's personality and emotional state. The
way in which the agent performs its movements is influenced by a
set of high-level parameters derived from Laban Movement Analysis
(Laban and Lawrence, 1947), and implemented in the Expressive
Motion Engine, EMOTE (Chi et al., 2000). The authors use two of
the four categories of Laban's annotation scheme: Effort and
Shape. Effort corresponds to the dynamics of the movement and it
is defined by four parameters: space (relation to the surrounding
space: direct/indirect), weight (impact of movement: strong/light),
time (urgency of movement: sudden/sustained), and flow (control
of movement: bound/free). The Shape component describes how the shape
of the body changes during movement (e.g., rising/sinking,
spreading/enclosing, advancing/retreating).
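A small sketch of how the four Effort factors might be carried as animation parameters; the dataclass and the [-1, 1] encoding are assumptions for illustration, not the EMOTE interface:

```python
from dataclasses import dataclass

@dataclass
class Effort:
    """Laban Effort factors, each encoded in [-1, 1]
    (an assumed convention; EMOTE's actual API is not shown here)."""
    space: float   # -1 = indirect  ... +1 = direct
    weight: float  # -1 = light     ... +1 = strong
    time: float    # -1 = sustained ... +1 = sudden
    flow: float    # -1 = free      ... +1 = bound

# e.g., an angry, forceful quality: direct, strong, sudden, fairly bound
angry = Effort(space=1.0, weight=1.0, time=1.0, flow=0.8)
```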