Figure 1. Percentage of agreement among subjects on valence values (positive, negative, I don't know) in the four experimental conditions (happy music alone, and happy music played with neutral, positive, and negative visual stimuli, respectively).
The resulting nine paired stimuli were then presented in a PowerPoint presentation to 42 subjects, equally balanced for gender and aged between 19 and 31 years (mean = 23.98, SD = 3.49). While reading the sentences, the subjects were asked to judge the emotional quality of the paired melody by assigning it one of the following labels: happiness, sadness, anger, I don't know, or another emotional label.
Results show that subjects assign congruent labels to congruent
pairs, whereas they are significantly less likely to do so when the
Melody (M) does not match the Text (T) (Table 1).
As Table 1 illustrates, subjects' identification of the emotional feeling associated with the musical pieces is significantly higher for congruent paired stimuli than for mismatched ones. The congruent data in Table 1 fit the “equivalent” or “enhanced” behavior suggested by the Partan and Marler model: when the paired stimuli are congruent (same emotional text, same emotional melody), the subjects' response intensity is slightly enhanced (for happiness) or slightly weakened (for sadness and anger). The departure of the recipients' responses from the expected “equivalent” or “enhanced” behavior (in particular, the sad melody is better recognized alone, with 90% correct identification, than when paired with the sad text, with 78.6% correct identification) suggests that other variables may
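The agreement figures reported above are simple proportions of label choices over the 42 subjects in each condition. A minimal sketch of how such percentages are computed is shown below; the response counts are hypothetical, chosen only to illustrate the calculation (they reproduce the 78.6% figure but are not the study's raw data):

```python
from collections import Counter

def agreement_percentages(responses):
    """Return the percentage of subjects choosing each label in one condition."""
    counts = Counter(responses)
    n = len(responses)
    return {label: 100.0 * c / n for label, c in counts.items()}

# Hypothetical distribution for 42 subjects judging the sad melody
# paired with the sad text (illustrative only).
responses = ["sadness"] * 33 + ["I don't know"] * 5 + ["anger"] * 4
pcts = agreement_percentages(responses)
print(round(pcts["sadness"], 1))  # 33/42 of subjects -> 78.6
```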