84. Wöllmer, M., Klebert, N., Schuller, B.: Switching linear dynamic models for recognition
of emotionally colored and noisy speech. In: Proceedings 9th ITG Conference on Speech
Communication, ITG-Fachbericht, vol. 225. Bochum, Germany, ITG, VDE-Verlag (2010)
85. Romanyshyn, N.: Paralinguistic maintenance of verbal communicative interaction in literary
discourse (on the material of W. S. Maugham's novel "Theatre"). In: Experience of Designing
and Application of CAD Systems in Microelectronics—Proceedings of the 10th International
Conference, CADSM 2009, pp. 550-552. Polyana-Svalyava, Ukraine (2009)
86. Kennedy, L., Ellis, D.: Pitch-based emphasis detection for characterization of meeting recordings. In: Proceedings of the ASRU, pp. 243-248. Virgin Islands (2003)
87. Laskowski, K.: Contrasting emotion-bearing laughter types in multiparticipant vocal activity
detection for meetings. In: Proceedings of the ICASSP, pp. 4765-4768. Taipei, Taiwan, IEEE
(2009)
88. Massida, Z., Belin, P., James, C., Rouger, J., Fraysse, B., Barone, P., Deguine, O.: Voice
discrimination in cochlear-implanted deaf subjects. Hear. Res. 275 (1-2), 120-129 (2011)
89. Demouy, J., Plaza, M., Xavier, J., Ringeval, F., Chetouani, M., Prisse, D., Chauvin, D., Viaux, S., Golse, B., Cohen, D., Robel, L.: Differential language markers of pathology in autism, pervasive developmental disorder not otherwise specified and specific language impairment. Res. Autism Spectr. Disord. 5 (4), 1402-1412 (2011)
90. Mower, E., Black, M., Flores, E., Williams, M., Narayanan, S.: Design of an emotionally
targeted interactive agent for children with autism. In: Proceedings of the IEEE International
Conference on Multimedia and Expo (ICME 2011), pp. 1-6. Barcelona, Spain (2011)
91. de Sevin, E., Bevacqua, E., Pammi, S., Pelachaud, C., Schröder, M., Schuller, B.: A multimodal listener behaviour driven by audio input. In: Proceedings International Workshop on Interacting with ECAs as Virtual Characters, satellite of AAMAS 2010, p. 4. Toronto, Canada, ACM (2010)
92. Biever, C.: You have three happy messages. New Sci. 185 (2481), 21 (2005)
93. Martinez, C.A., Cruz, A.: Emotion recognition in non-structured utterances for human-robot interaction. In: IEEE International Workshop on Robot and Human Interactive Communication, pp. 19-23 (2005)
94. Batliner, A., Steidl, S., Nöth, E.: Associating children's non-verbal and verbal behaviour:
body movements, emotions, and laughter in a human-robot interaction. In: Proceedings of
ICASSP, pp. 5828-5831. Prague (2011)
95. Delaborde, A., Devillers, L.: Use of non-verbal speech cues in social interaction between
human and robot: emotional and interactional markers. In: AFFINE'10—Proceedings of the
3rd ACM Workshop on Affective Interaction in Natural Environments, Co-located with ACM
Multimedia 2010, pp. 75-80. Florence, Italy (2010)
96. Schröder, M., Cowie, R., Heylen, D., Pantic, M., Pelachaud, C., Schuller, B.: Towards responsive sensitive artificial listeners. In: Proceedings 4th International Workshop on Human-Computer Conversation, p. 6. Bellagio, Italy (2008)
97. Burkhardt, F., van Ballegooy, M., Englert, R., Huber, R.: An emotion-aware voice portal. In:
Proceedings of the Electronic Speech Signal Processing ESSP, pp. 123-131 (2005)
98. Mishne, G., Carmel, D., Hoory, R., Roytman, A., Soffer, A.: Automatic analysis of call-center
conversations. In: Proceedings of the CIKM'05, pp. 453-459. Bremen, Germany (2005)
99. Belin, P., Fillion-Bilodeau, S., Gosselin, F.: The Montreal affective voices: a validated set of nonverbal affect bursts for research on auditory affective processing. Behav. Res. Meth. 40 (2), 531-539 (2008)
100. Schoentgen, J.: Vocal cues of disordered voices: an overview. Acta Acustica united with Acustica 92 (5), 667-680 (2006)
101. Rektorova, I., Barrett, J., Mikl, M., Rektor, I., Paus, T.: Functional abnormalities in the primary orofacial sensorimotor cortex during speech in Parkinson's disease. Mov. Disord. 22 (14), 2043-2051 (2007)
102. Sapir, S., Ramig, L.O., Spielman, J.L., Fox, C.: Formant centralization ratio: a proposal for a
new acoustic measure of dysarthric speech. J. Speech Lang. Hear. Res. 53 (2009)