References
1. Picard RW, Vyzas E, Healey J (2001) Toward machine emotional intelligence: analysis of
affective physiological state. IEEE Trans Pattern Anal Mach Intell 23:1175–1191
2. Nicholson J, Takahashi K, Nakatsu R (2000) Emotion recognition in speech using neural
networks. Neural Comput Appl 9:290–296
3. Ververidis D, Kotropoulos C (2005) Emotional speech classification using Gaussian mixture models and the sequential floating forward selection algorithm. In: IEEE international conference on multimedia and expo, pp 1500–1503
4. Pierre-Yves O (2003) The production and recognition of emotions in speech: features and
algorithms. Int J Hum Comput Stud 59:157–183
5. Kim EH, Hyun KH, Kim SH, Kwak YK (2009) Improved emotion recognition with a novel
speaker-independent feature. In: IEEE/ASME transactions on mechatronics, p 14
6. Shan MK, Kuo FF, Chiang MF, Lee SY (2009) Emotion-based music recommendation by affinity discovery from film music. Expert Syst Appl 36:7666–7674
7. Hemenover SH (2003) Individual differences in rate of affect change: studies in affective
chronometry. J Pers Soc Psychol 85:121–131
8. Eaton LG, Funder DC (2001) Emotional experience in daily life: valence, variability, and rate
of change. Emotion 1:413–421
9. Marsella SC, Gratch J (2009) EMA: a process model of appraisal dynamics. Cogn Syst Res
10:70–90
10. Ye J, Li Y, Wei L, Tang Y, Wang J (2009) The race effect on the emotion-induced gamma oscillation in the EEG. In: 2nd international conference on biomedical engineering and informatics, pp 1–4
11. Lee CM, Narayanan SS (2005) Toward detecting emotions in spoken dialogs. IEEE Trans
Speech Audio Process 13:293–303
12. Morrison D, De Silva LC (2007) Voting ensembles for spoken affect classification. J Netw Comput Appl 30:1356–1365
13. Schuller B, Reiter S, Muller R, Al-Hames M, Lang M, Rigoll G (2005) Speaker independent
speech emotion recognition by ensemble classification. In: IEEE international conference on
multimedia and expo
14. Schuller B, Rigoll G (2006) Timing levels in segment-based speech emotion recognition. In:
Proceedings of INTERSPEECH
15. Yeh JH, Pao TL, Lin CY, Tsai YW, Chen YT (2011) Segment-based emotion recognition
from continuous Mandarin Chinese speech. Comput Hum Behav 27:1545–1552
16. Vogt T, Andre E (2005) Comparing feature sets for acted and spontaneous speech in view of
automatic emotion recognition. In: IEEE international conference on multimedia and expo
17. Shuzo M, Yamamoto T, Shimura M, Monma F, Mitsuyoshi S, Yamada I (2011) Construction
of natural voice database for analysis of emotion and feeling. J Inf Process 53:1185–1194
18. Steuer R, Kurths J, Daub CO, Weise J, Selbig J (2002) The mutual information: detecting and
evaluating dependencies between variables. Bioinformatics 18(Suppl 2):S231–S240
19. Specht DF (1990) Probabilistic neural networks. Neural Netw 3:109–118
20. Morrison D, Wang R, De Silva LC (2007) Ensemble methods for spoken emotion recognition
in call-centres. Speech Commun 49:98–112
21. Lang PJ, Bradley MM, Cuthbert BN (2008) International affective picture system (IAPS):
affective ratings of pictures and instruction manual. University of Florida, Gainesville