to animate avatars. Nevertheless, many computer applications require real-time,
easy-to-use generation of face animation parameters, so the first solutions,
built on motion capture equipment, prove too tedious for many practical
purposes. Most applications using Talking Heads target telecommunication, a
context that demands real-time operation and low computing cost for both
analysis and synthesis. Current research therefore tends to derive real-time
animation data from speech analysis or from speech synthesized from text.
Although these techniques are robust enough to generate parameters for driving
avatars, they cannot provide realistic data for face animation.
To obtain realistic and natural 3D Face Animation (FA), we need to study and
understand complete human face behavior, and image-based methods offer
flexible, low-cost techniques for doing so. In this chapter we present the
latest and most effective systems that analyze facial expression in monocular
images to generate facial animation, reproducing speaker-dependent face motion
on 3D face models. Figure 1 shows the basic flowchart of systems dedicated to
facial expression and motion analysis on monocular images. Video or still
images are first analyzed to detect the face location in the image and to
assess the conditions under which the analysis will be performed (head pose,
lighting, face occlusions, etc.). Image motion and expression analysis
algorithms then extract specific data, which is finally interpreted to
generate face motion synthesis.
Figure 1. Image input is analyzed to find the face's general characteristics:
global motion, lighting, etc. Image processing is then performed to obtain
useful data that can subsequently be interpreted to produce face animation
synthesis.
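
To make this flow concrete, the sketch below outlines the three stages of the Figure 1 pipeline in Python. It uses OpenCV's Haar-cascade face detector as one possible choice for the detection stage; the chapter does not prescribe a specific detector. The extract_motion_data and synthesize_animation functions are hypothetical placeholders standing in for the expression-analysis and synthesis stages described above.

import cv2

# Stage 1 stand-in: OpenCV's bundled Haar-cascade frontal-face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def analyze_frame(frame):
    """Stage 1: locate the face and estimate basic imaging conditions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                       # no face: skip this frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest face
    lighting = gray[y:y + h, x:x + w].mean()  # crude lighting estimate
    return {"bbox": (x, y, w, h), "lighting": lighting}

def extract_motion_data(frame, context):
    """Stage 2 (placeholder): motion/expression analysis on the face region.
    A real system would track features or fit a model here."""
    x, y, w, h = context["bbox"]
    return frame[y:y + h, x:x + w]

def synthesize_animation(motion_data):
    """Stage 3 (placeholder): interpret extracted data as face animation
    parameters driving a 3D face model."""
    pass

cap = cv2.VideoCapture(0)                 # monocular video source (webcam)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    context = analyze_frame(frame)
    if context is not None:
        synthesize_animation(extract_motion_data(frame, context))
cap.release()

The per-frame loop mirrors the flowchart: detection and condition estimation gate the analysis, so frames without a usable face are skipped rather than passed downstream.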