6.6.2 Physiological Measures and Emotion Recognition
This section emphasizes the significance of physiological responses during music listening for emotion recognition and its applications. Music emotion recognition is of considerable importance for many research fields, including music retrieval, health applications, and human-machine interfaces. Music collections are increasing rapidly, and there is a need to intelligently classify and retrieve music based on emotion. Training computers to recognize human emotional states, on the other hand, is a key issue in the successful realization of advanced computer-human interaction systems. The goal is to develop computational models that are able to link a given physiological pattern to an emotional state.
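As a rough illustration of what such a computational model might look like, the sketch below maps a small vector of per-excerpt physiological features to a discrete emotion label with an off-the-shelf classifier. The feature set, numerical values, and labels are assumptions made purely for illustration and are not taken from the studies discussed in this chapter.

# Hypothetical sketch of a model linking a physiological feature pattern
# to an emotional state; features, values, and labels are invented for
# illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per music excerpt:
# [mean heart rate (bpm), respiration rate (breaths/min),
#  mean skin conductance (microsiemens), muscle activity (a.u.)]
X_train = np.array([
    [72.0, 14.0, 2.1, 0.30],
    [70.0, 13.5, 2.4, 0.28],
    [95.0, 20.0, 5.8, 0.90],
    [88.0, 18.0, 4.9, 0.75],
])
y_train = ["calm", "calm", "excited", "excited"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Map a new physiological pattern to an emotional state.
new_pattern = np.array([[90.0, 19.0, 5.1, 0.80]])
print(model.predict(new_pattern))  # expected output: ['excited']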
Relatively little attention has so far been paid to physiological responses compared to other modalities (audio-visual, for example) for emotion recognition. Nevertheless, a significant amount of work has shown that musical emotions can be successfully recognized from physiological measures such as heart rate, respiration, skin conductance, and facial expressions. Picard et al. (2001) were the first to show that certain affective states can be recognized using physiological signals, including heart rate, respiration, skin conductivity, and muscle activity. Nasoz et al. (2003) used movie clips to induce emotions in 29 subjects and, by combining physiological measures and subjective components, achieved 83 % recognition accuracy. Wagner et al. (2005) recorded four biosignals from subjects listening to music and reached a recognition accuracy of 92 %. Kim and André (2008) used music excerpts to spontaneously induce emotions. Four biosensors were used during the experiments to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. The best features were extracted and their effectiveness for emotion recognition was tested; a classification accuracy of 70–90 % was achieved for subject-independent and subject-dependent classification, respectively. Koelstra et al. (2011) used a multimodal approach based on physiological signals for emotion recognition, with music video clips as stimuli. During the experiments, EEG signals, peripheral physiological signals, and frontal video were recorded. A variety of features was extracted and used for emotion recognition with different fusion techniques. The results show only a modest increase in recognition performance, indicating limited complementarity of the different modalities used. Recently, a combination of acoustic features and physiological responses was used for emotion recognition during music listening (Trochidis et al. 2012). The reported results indicate that merging the acoustic and physiological modalities substantially improves the recognition rate of participants' ratings of felt emotion compared to the results obtained using single modalities.
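The fusion idea running through these studies can be made concrete with a rough sketch. The snippet below extracts simple statistics from (synthetic) physiological channels and an audio excerpt, concatenates them into a single feature vector (feature-level fusion), and evaluates a standard classifier on the fused representation. The feature choices, synthetic data, and classifier are assumptions made for illustration; none of this reproduces the specific pipelines of the studies cited above. Decision-level fusion, where a separate classifier per modality votes on the final label, is a common alternative.

# Rough sketch of feature-level fusion of acoustic and physiological features.
# All data here are synthetic and the feature definitions are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def physio_features(skin_conductance, heart_rate):
    """Per-excerpt statistics from two physiological channels."""
    return np.array([
        skin_conductance.mean(), skin_conductance.std(),
        heart_rate.mean(), np.diff(heart_rate).std(),
    ])

def acoustic_features(audio):
    """Placeholder acoustic descriptors (signal energy statistics)."""
    return np.array([np.abs(audio).mean(), audio.std()])

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=40)  # 0 = low arousal, 1 = high arousal (synthetic)
fused = []
for y in labels:
    sc = rng.normal(3.0 + 2.0 * y, 0.5, 512)      # synthetic skin conductance
    hr = rng.normal(75.0 + 15.0 * y, 3.0, 512)    # synthetic heart rate
    audio = rng.normal(0.0, 0.5 + 0.3 * y, 2048)  # synthetic audio excerpt
    # Feature-level fusion: concatenate the modality-specific feature vectors.
    fused.append(np.concatenate([physio_features(sc, hr), acoustic_features(audio)]))
X = np.vstack(fused)

# Cross-validated accuracy of a classifier trained on the fused representation.
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())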
One of the main problems in assessing musical emotions from physiological measures is extracting features that are relevant. At present, most studies extract features by simply removing non-relevant ones and keeping relevant ones based on statistical measures. Weighting the features of different modalities equally does not appear to lead to improved recognition accuracy. Alternative approaches should be developed that treat valence and arousal separately. To combine