cues: anatomic and cognitive correlates in neurodegenerative disease. NeuroImage 47 (4),
2005-2015 (2009)
158. Tepperman, J., Traum, D., Narayanan, S.: “Yeah Right”: sarcasm recognition for spoken dialogue systems. In: Proceedings of Interspeech, pp. 1838-1841. Pittsburgh, Pennsylvania (2006)
159. Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S.: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31 (1), 39-58 (2009)
160. Batliner, A., Steidl, S., Schuller, B., Seppi, D., Laskowski, K., Vogt, T., Devillers, L., Vidrascu, L., Amir, N., Kessous, L., Aharonson, V.: Combining efforts for improving automatic classification of emotional user states. In: Proceedings 5th Slovenian and 1st International Language Technologies Conference, ISLTC 2006, pp. 240-245. Ljubljana, Slovenia, October 2006. Slovenian Language Technologies Society (2006)
161. Schuller, B., Vlasenko, B., Eyben, F., Wöllmer, M., Stuhlsatz, A., Wendemuth, A., Rigoll, G.: Cross-corpus acoustic emotion recognition: variances and strategies. IEEE Trans. Affect. Comput. 1 (2), 119-131 (2010)
162. Schuller, B., Vlasenko, B., Eyben, F., Rigoll, G., Wendemuth, A.: Acoustic emotion recognition: a benchmark comparison of performances. In: Proceedings 11th Biannual IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2009, pp. 552-557. Merano, Italy. IEEE (2009)
163. Stuhlsatz, A., Meyer, C., Eyben, F., Zielke, T., Meier, G., Schuller, B.: Deep neural networks for acoustic emotion recognition: raising the benchmarks. In: Proceedings 36th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, pp. 5688-5691. Prague, Czech Republic. IEEE (2011)
164. Ververidis, D., Kotropoulos, C.: A state of the art review on emotional speech databases. In:
1st Richmedia Conference, pp. 109-119. Lausanne, Switzerland (2003)
165. Grimm, M., Kroschel, K., Narayanan, S.: The Vera am Mittag German audio-visual emotional
speech database. In: Proceedings of the IEEE International Conference on Multimedia and
Expo (ICME), pp. 865-868. Hannover, Germany (2008)
166. Steidl, S.: Automatic Classification of Emotion-Related User States in Spontaneous Speech.
Logos, Berlin (2009)
167. Batliner, A., Seppi, D., Steidl, S., Schuller, B.: Segmenting into adequate units for automatic recognition of emotion-related episodes: a speech-based approach. Adv. Human Comput. Interact., Special Issue on Emotion-Aware Natural Interaction, vol. 2010, Article ID 782802, 15 pages (2010)
168. Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., Votsis, G., Kollias, S., Fellenz, W., Taylor,
J.: Emotion recognition in human-computer interaction. IEEE Signal Process. Mag. 18 (1),
32-80 (2001)
169. Eyben, F., Wöllmer, M., Schuller, B.: openEAR: introducing the Munich open-source emotion and affect recognition toolkit. In: Proceedings 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009, vol. I, pp. 576-581. Amsterdam, The Netherlands, HUMAINE Association, IEEE (2009)
170. Ishi, C., Ishiguro, H., Hagita, N.: Using prosodic and voice quality features for paralinguistic information extraction. In: Proceedings of Speech Prosody 2006, pp. 883-886. Dresden, Germany (2006)
171. Müller, C.: Classifying speakers according to age and gender. In: Müller, C. (ed.) Speaker Classification II. Lecture Notes in Computer Science/Artificial Intelligence, vol. 4343. Springer, Heidelberg (2007)
172. Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., Moore, G., Odell, J., Ollason, D., Povey, D., Valtchev, V., Woodland, P.: The HTK Book (v3.4). Cambridge University Engineering Department, Cambridge (2006)
173. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321-357 (2002)