RUTH, the Rutgers University Talking Head [5][6], animates nonverbal signals in synchrony with speech and lip movements. The intonation is specified using the tones and break indices (ToBI) standard [27]. Brow expressions are categorized in terms of Ekman's facial action units [11]. The expressions are generated by rules inferred from lexical structure along with behavior observed in live performances. Additionally, in face-to-face communication the eyes alone can convey information about the participants' beliefs, goals, and emotions [30].
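To make the rule-based approach concrete, the following is a minimal sketch of how timed ToBI events might be mapped to Ekman brow action units. The ToBI labels are standard, but the rule table, timings, and function names are illustrative assumptions, not the rule set of the cited system.

# A sketch of rule-based brow animation; the AU mapping below is
# hypothetical, not the rules of any published system.
# Brow-related Ekman action units:
#   AU1 = inner brow raiser, AU2 = outer brow raiser, AU4 = brow lowerer
BROW_RULES = {
    "H*":   ["AU1", "AU2"],   # high pitch accent -> raise brows for emphasis
    "L+H*": ["AU1", "AU2"],   # rising pitch accent -> raise brows
    "L*":   ["AU4"],          # low pitch accent -> lower brows
    "L-L%": [],               # phrase-final fall -> relax to neutral
}

def brow_actions(tobi_events):
    """Map (time, ToBI label) pairs to timed brow action units."""
    return [(t, BROW_RULES.get(label, [])) for t, label in tobi_events]

utterance = [(0.12, "H*"), (0.48, "L*"), (0.90, "L-L%")]
for time, aus in brow_actions(utterance):
    print(f"t={time:.2f}s -> {aus or ['neutral']}")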
Upper body gestures (i.e., of the arms and torso) allow more opportunity for expressive speech. BEAT, a toolkit by Justine Cassell et al. [2], allows animators to type in text they wish spoken by a synthetic figure. Using linguistic and contextual information contained in the text, the movements of the hands, arms, and face, as well as the intonation of the speech, can be controlled. A knowledge base relating gestures to emotions within the context of grammatical constructions is used to suggest gestural behaviors. These are passed through an activity filter that produces a schedule of behaviors, which is then converted to word timing and associated gestural animation. Laban movement analysis (LMA) (e.g., [19]) is used by Chi et al. as a theoretical framework for generating the qualitative aspects of linguistic gestures in the EMOTE system [4]. The shape and effort components of LMA are used to control the timing of a gesture, the articulation of the arms during the gesture, and the animation of the torso. As an alternative to parameterized models of behavior, data-driven techniques use captured human motion to animate a conversational character [3][38]. Performance data are segmented and annotated in order to provide a database of motion and speech samples. These samples are recombined and blended together to create extended utterances.
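As a concrete illustration of effort-based parameterization, the following sketch warps the keyframes of a simple arm gesture according to two of LMA's four effort factors. The scaling scheme and numeric weights are assumptions made for illustration; they are not the published EMOTE model.

from dataclasses import dataclass

@dataclass
class Effort:
    """LMA effort factors, each in [-1, 1]:
    space (indirect..direct), weight (light..strong),
    time (sustained..sudden), flow (free..bound)."""
    space: float = 0.0
    weight: float = 0.0
    time: float = 0.0
    flow: float = 0.0

def apply_effort(keyframes, effort):
    """Warp (time, arm_angle) keyframes: a 'sudden' time factor compresses
    keyframes toward the start of the gesture, and a 'strong' weight factor
    exaggerates joint amplitude."""
    duration = keyframes[-1][0]
    exponent = 2.0 ** effort.time          # >1 front-loads the motion
    amplitude = 1.0 + 0.5 * effort.weight  # strong -> larger excursion
    return [(duration * (t / duration) ** exponent, angle * amplitude)
            for t, angle in keyframes]

reach = [(0.0, 0.0), (0.5, 40.0), (1.0, 65.0)]  # neutral reaching gesture
for t, a in apply_effort(reach, Effort(time=0.8, weight=0.7)):
    print(f"t={t:.2f}s  angle={a:.1f} deg")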
11.4.3 Modeling individuality: personality and emotions
As described in the previous section, the ability to generate gestures along with speech makes a character more believable. In the EMOTE system, the gestures can be modified by shape and effort parameters [4]. While in EMOTE these values are externally supplied, the system demonstrates the ability to vary behavior based on parameters. By associating such parameters with the internal state of a character, individuals within a population can be differentiated by the quality of their behavior, that is, by their personality [43]. For the current discussion, personality refers to the time-invariant traits that control the idiosyncratic actions of an individual. Emotions are considered time-varying attributes of an individual. Mood is sometimes considered a third attribute of individuality; it is a personal trait that varies over time, but on a longer timescale than emotion.
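The distinction between these three timescales can be made concrete with a small state update, sketched below. The trait names, half-lives, and update rule are illustrative assumptions, not a model taken from the literature.

class CharacterState:
    EMOTION_HALF_LIFE = 10.0   # seconds: emotions fade quickly
    MOOD_HALF_LIFE = 600.0     # seconds: mood drifts far more slowly

    def __init__(self, extraversion, neuroticism):
        # Personality: time-invariant traits, fixed at creation.
        self.extraversion = extraversion
        self.neuroticism = neuroticism
        # Emotion and mood: time-varying attributes.
        self.anger = 0.0   # intensity in [0, 1]
        self.mood = 0.0    # valence: negative = gloomy, positive = cheerful

    def feel(self, anger_stimulus):
        """An event raises emotion at once; mood absorbs a small share."""
        self.anger = min(1.0, self.anger + anger_stimulus)
        self.mood -= 0.1 * anger_stimulus

    def update(self, dt):
        """Decay emotion quickly and mood slowly toward neutral."""
        self.anger *= 0.5 ** (dt / self.EMOTION_HALF_LIFE)
        self.mood *= 0.5 ** (dt / self.MOOD_HALF_LIFE)

c = CharacterState(extraversion=0.7, neuroticism=0.4)
c.feel(anger_stimulus=0.8)
for minute in range(4):
    print(f"t={minute}min  anger={c.anger:.3f}  mood={c.mood:+.3f}")
    c.update(dt=60.0)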
Graphics research into modeling personalities often borrows from research in psychology, where models for characterizing personalities have already been proposed. These models include OCEAN and PEN. The OCEAN model of personality, also called the Five-Factor Model or the "Big 5," consists of openness, conscientiousness, extraversion, agreeableness, and neuroticism [32]. The PEN model of Hans Eysenck has three dimensions of personality: psychoticism, extraversion, and neuroticism [14]. The Eysenck Personality Profiler, a questionnaire widely used in business, measures 21 personality traits along these three dimensions.
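To show how such trait scores might drive a character, the sketch below maps OCEAN values to a few animation parameters. The parameter names and linear weights are invented for illustration and are not drawn from any of the cited systems.

from dataclasses import dataclass

@dataclass
class Ocean:
    """Five-Factor Model scores, each normalized to [0, 1]."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def behavior_parameters(p):
    """Derive per-character animation parameters from personality traits;
    e.g., extraversion widens gestures while neuroticism adds fidgeting."""
    return {
        "gesture_amplitude": 0.5 + 0.5 * p.extraversion,
        "speech_rate":       0.8 + 0.4 * p.extraversion,
        "fidget_rate":       0.7 * p.neuroticism,
        "gaze_aversion":     0.6 * p.neuroticism * (1.0 - p.extraversion),
        "posture_openness":  0.3 + 0.7 * p.agreeableness,
    }

shy = Ocean(0.4, 0.6, 0.2, 0.5, 0.8)
for name, value in behavior_parameters(shy).items():
    print(f"{name:>18}: {value:.2f}")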
As with personality, graphics research in emotions has borrowed heavily from psychology. Psychology, however, is not definitive about whether there is a set of basic emotions and, if so, what they are. Ekman [10] identifies six basic emotions: happiness, sadness, fear, disgust, surprise, and anger. Emotional expressions and gestures contribute to communication (vis-à-vis linguists). Five basic emotions are suggested by Oatley and Johnson-Laird [25]: happiness, anxiety, sadness, anger, and disgust.