2.4 Summary, Conclusion, and Future Work
Speech emotion profiling can be an effective tool for investigating intra- and
inter-cultural variations from various perspectives. It enables one to visualize the
interplay of emotions, which may reveal important information that is not
observable with conventional signal analysis tools such as speech recognition and
speaker identification. More work on understanding the profile, especially on
extracting the relevant features and on the appropriate data processing, is needed
to fully benefit from such a visualization tool. The speech emotion profile, coupled
with a three-dimensional affective-space model, may provide a better
understanding of the dynamics of driver behavior. This work also illustrated that
there are strong correlations between driver behavior and emotion, and that these
correlations can be empirically measured from speech signals.
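As a rough illustration of how such a correlation could be measured, the sketch below extracts utterance-level MFCC statistics as speech emotion features and correlates a simple activation proxy with a per-utterance driver behavior score. It is a minimal sketch, not the method used in this work: the use of librosa for MFCC extraction, the synthetic audio, and the names `driver_scores` and `activation` are illustrative assumptions.

```python
# Minimal sketch: utterance-level MFCC features correlated with a
# driver behavior score. Data and variable names are hypothetical.
import numpy as np
import librosa  # assumed available for MFCC extraction

sr = 16000
rng = np.random.default_rng(0)

def mfcc_features(signal, sr, n_mfcc=13):
    """Mean MFCC vector over all frames of one utterance."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Synthetic stand-ins for recorded utterances and observed behavior ratings.
utterances = [rng.standard_normal(sr * 2).astype(np.float32) for _ in range(20)]
driver_scores = rng.standard_normal(20)  # e.g., per-utterance aggressiveness rating

features = np.vstack([mfcc_features(u, sr) for u in utterances])

# Crude activation proxy: the first MFCC coefficient (energy-related).
activation = features[:, 0]

# Empirical correlation between the speech-derived measure and behavior.
r = np.corrcoef(activation, driver_scores)[0, 1]
print(f"Pearson correlation: {r:.3f}")
```

In practice, richer feature sets and a mapping into the three-dimensional affective space would replace the single activation proxy used here.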