different emotional labelling of the melody for males and females. In particular, for the Angry(T)Happy(M) stimulus, the label attribution was significantly different both for males (F(2,80) = 3.363, p = .04) and for females (F(2,80) = 8.673, p = .001), with a male preference for the “HAPPY” label and a female preference for the “Other” label. Male subjects differed significantly from females in attributing the label “SAD” to the Angry(T)Sad(M) stimulus (F(2,80) = 4.186, p = .02) and the label “HAPPY” to the Sad(T)Happy(M) stimulus (F(2,80) = 7.227, p = .001).
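As a purely illustrative aside on the statistics reported above, the sketch below shows how a one-way ANOVA with (2, 80) degrees of freedom can be computed. It is not the authors' analysis script: the three arrays of label-attribution scores and the group sizes are hypothetical, chosen only so that three groups with 83 observations yield the reported degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical label-attribution scores for three response categories
# ("HAPPY", "SAD", "Other"); group sizes 28 + 28 + 27 = 83 give
# between-group df = 3 - 1 = 2 and within-group df = 83 - 3 = 80,
# matching the F(2,80) form of the values reported above.
happy = rng.normal(loc=0.6, scale=0.2, size=28)
sad = rng.normal(loc=0.4, scale=0.2, size=28)
other = rng.normal(loc=0.5, scale=0.2, size=27)

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(happy, sad, other)
print(f"F(2, 80) = {f_stat:.3f}, p = {p_value:.3f}")
```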
In contrast to the data illustrated in Figure 1, where statistically significant effects of context on the melody assessment are observed only for happiness, in this new context neither the text nor the melody predominates in the subjects' assessment. For some pairs, the melody assessment was affected by the written text (see the paired stimuli 6 and 9); for others, the opposite held (see the paired stimuli 5 and 8).
Subjects are more oriented toward the melody label (see the paired stimuli 4, 5, 7, and 8 in Table 1) than toward the text label (see the paired stimuli 6 and 9 in Table 1), but the percentage of correct label attributions is always significantly lower than that obtained for the text alone or the melody alone (see Table 1). Comparison between the two experiments shows that written texts affect the emotional decoding process more strongly than visual images. However, the significance of these results for understanding multimodal integration in human communication remains unclear. One may speculate that the results are affected by the change in the subjects' task, from a dimensional (valence) judgment to a basic-emotion labelling approach (or a basic-emotion assumption). In our view, the two approaches involve different evaluation processes: the dimensional approach is more immediate and instinctive, whereas the basic-emotion approach requires a cognitive assessment of the linguistic labels. More experimental data are needed to interpret these differences.
Similar effects in different contexts have been described by other authors (Bargh et al., 1996; Russell, 2003; Esposito, 2009), suggesting that contextual interactions at the organizational, cultural, and physical levels play a critical role in shaping individuals' social conduct, providing a means of rendering the world sensible and interpretable during everyday activities. New cognitive models must account for embodied knowledge acquisition and use. In this respect, the current literature has coined the term “embodied cognition”, now employed in a multiplicity of domains ranging from psychology, human-computer interaction, affective computing, sociology, and neuroscience to cognitive anthropology. Embodied cognition theories explain cognition, and consequently perception and action, either from a radical position that has made them directly