humans (and also non-human animals) can interact in a neutral
interactional setting, is rather common and has been tacitly accepted
to the point that facial emotional expressions have not been included
in the set of “facial displays”. According to Ekman (1989), “affective
facial displays” are universal and must be considered independent of
language (and therefore of communication?). It thus seems that facial
emotional expressions have nothing to do with “communication” and
that, in general, emotions cannot be invoked within this context.
In the following, I will report data on the human ability to decode
visual (generally facial and gestural) and vocal emotional information.
5.1 Weighing the verbal and
nonverbal emotional information
Most of the current human psychology literature (with some very recent
exceptions) on the recognition of emotional facial expressions refers
to static photos of actors/actresses portraying the requested emotional
face. As pointed out in Esposito (2007), these are static images, not
embedded in a context, and not showing the dynamic features of
facial muscle changes that characterize human-human interaction. In
this respect, they can be considered qualitative targets that capture the
apex of the expression, i.e. the instant at which the indicators of the
selected emotion are most marked. Humans are good at attributing
emotional labels to such facial expressions, and they perform much
better on this task than when asked to assess vocal emotional
expressions (Esposito, 2009). However, in interaction, facial
emotional expressions are intrinsically dynamic and vary over time.
Is dynamic visual information emotionally richer than auditory
information? To answer this question, it is worth reporting the results
of a series of experiments based on emotional video-clips extracted
from Italian and American English movies (Esposito, 2007, 2009).
These experiments were aimed at evaluating the dynamic perception
of six emotional states (happiness, sarcasm/irony, fear, anger, surprise
and sadness) through the audio alone, mute video and audio/video
combined. For each language (Italian and American English), 10 stimuli
were collected for each emotional state (expressed by five different
actors and five different actresses), for a total of 180 Italian stimuli (60
audio, 60 mute video, and 60 audio/video) and 180 American English
stimuli. One hundred and eighty American and 180 Italian subjects
participated; each group was separated into six subgroups of 30 subjects,
tested respectively on Italian audio alone, Italian video alone, and Italian
combined audio/video, as well as on American English audio alone,
American English video alone, and American English combined audio/video.
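As a quick sanity check on the design's arithmetic, the stimulus and subject counts reported above can be sketched as follows (the variable names are illustrative, not drawn from the original study):

```python
# Sketch of the experimental design's arithmetic (illustrative names,
# not from the original study).
emotions = ["happiness", "sarcasm/irony", "fear", "anger",
            "surprise", "sadness"]
stimuli_per_emotion = 10              # 5 actors + 5 actresses
conditions = ["audio", "mute video", "audio/video"]
languages = ["Italian", "American English"]

# Stimuli: per condition, and per language across all three conditions
per_condition = len(emotions) * stimuli_per_emotion       # 6 * 10 = 60
per_language = per_condition * len(conditions)            # 60 * 3 = 180

# Subjects: 180 per nationality, split into six groups of 30,
# one group per (language x condition) combination
subjects_per_nationality = 180
groups = len(languages) * len(conditions)                 # 2 * 3 = 6
subjects_per_group = subjects_per_nationality // groups   # 180 / 6 = 30

print(per_condition, per_language, subjects_per_group)    # 60 180 30
```

This confirms that the 180 stimuli per language and the six groups of 30 subjects per nationality are internally consistent with the six emotions, ten stimuli per emotion, and three presentation conditions described.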