fields. A proof of this interest is that the new standard for the coding of hybrid
natural-synthetic media, MPEG-4, gives special importance to facial animation
(Ostermann, 2002). The standard specifies a common syntax for describing face
behavior, thus permitting interoperability among different face animation
systems. At this point in the evolution and deployment of MPEG-4-compliant
applications, several concerns have arisen: Has the standard provided a global
solution that all specific face animation systems can adopt? Or does the syntax
overly restrict the range of motion that can be expressed?
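To make this interoperability concrete, the following minimal sketch (in Python) illustrates how a FAP-like parameter stream, normalized by face-proportion units (FAPUs) measured on a neutral face, could displace the feature points of an arbitrary face model. The feature-point names, coordinates, amplitudes, and the axis each parameter moves are illustrative assumptions and are not taken from the standard's tables.

# Minimal sketch of an MPEG-4-style facial animation parameter (FAP) frame.
# FAP amplitudes are unitless integers; they are converted to displacements
# through FAPUs, i.e., neutral-face distances divided by 1024, so the same
# stream can animate faces of different proportions.
# The concrete values and point names below are hypothetical placeholders.

FAPU = {
    "MW0": 0.060 / 1024,   # mouth-width unit (hypothetical neutral-face distance)
    "MNS0": 0.040 / 1024,  # mouth-nose-separation unit (hypothetical)
}

# Neutral-face 2D positions of two feature points (hypothetical coordinates).
neutral_points = {
    "left_mouth_corner": (-0.030, -0.050),
    "right_mouth_corner": (0.030, -0.050),
}

# One frame of the parameter stream: (feature point, axis, FAPU key, amplitude).
fap_frame = [
    ("left_mouth_corner", 0, "MW0", -150),   # move left corner outward in x
    ("right_mouth_corner", 0, "MW0", 150),   # move right corner outward in x
    ("left_mouth_corner", 1, "MNS0", 80),    # raise both corners (smile-like)
    ("right_mouth_corner", 1, "MNS0", 80),
]

def apply_fap_frame(points, frame, fapu):
    """Displace neutral feature points by FAP amplitudes scaled by FAPUs."""
    animated = {name: list(pos) for name, pos in points.items()}
    for point, axis, unit, amplitude in frame:
        animated[point][axis] += amplitude * fapu[unit]
    return {name: tuple(pos) for name, pos in animated.items()}

for name, pos in apply_fap_frame(neutral_points, fap_frame, FAPU).items():
    print(name, pos)

Because the receiver needs only the parameter stream and the FAPUs of its own model, any compliant player can reproduce the motion, which is the interoperability referred to above.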
Whatever the answer, the existence of these doubts shows that there is still
a long way to go to master face animation and, more specifically, the automatic
generation of realistic human-like face motion. All the analysis techniques
covered in this chapter are of great help in the study of facial motion because
image analysis intrudes the least on the observed scenario, thus permitting the
study of real, completely natural behavior.
References
Ahlberg, J. (2002). An active model for facial feature tracking. EURASIP Journal
on Applied Signal Processing, 6, 566-571.
Andrés del Valle, A. C. & Dugelay, J. L. (2002). Facial expression analysis
robust to 3D head pose motion. Proceedings of the International
Conference on Multimedia and Expo.
Bartlett, M. S. (2001). Face image analysis by unsupervised learning.
Boston, MA: Kluwer Academic Publishers.
Bartlett et al. (2001). Automatic analysis of spontaneous facial behavior: A
final project report (Tech. Rep. No. 2001.08). San Diego, CA: Univer-
sity of California, San Diego, MPLab.
Black, M. J. & Yacoob, Y. (1997). Recognizing facial expressions in image
sequences using local parameterized models of image motion. Interna-
tional Journal of Computer Vision, 25(1), 23-48.
Chen, L. S. & Huang, T. S. (2000). Emotional expressions in audiovisual human
computer interaction. Proceedings of the International Conference on
Multimedia and Expo.
Chou, J. C., Chang, Y. J. & Chen, Y. C. (2001). Facial feature point tracking and
expression analysis for virtual conferencing systems. Proceedings of the
International Conference on Multimedia and Expo.
Cordea, M. D., Petriu, E. M., Georganas, N. D., Petriu, D. C. & Whalen, T. E.
(2001). 3D head pose recovery for interactive virtual reality avatars.