Figure 6. Facial features (eyes, mouth, brows, …) are extracted from the
input image; after analysis, the parameters of their deformable models are
fed into the NNs, which generate the AUs corresponding to the facial
expression. Image courtesy of The Robotics Institute at Carnegie Mellon
University.
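In outline, the final stage of the pipeline the caption describes maps a vector of deformable-model parameters to AU intensities. The sketch below is purely illustrative, not the CMU system: the layer sizes, the tanh/sigmoid choices, and the untrained random weights are all assumptions made only to show the shape of such a mapping.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 12 deformable-model parameters in,
# intensities for 15 action units (AUs) out.
N_PARAMS, N_HIDDEN, N_AUS = 12, 32, 15

# Untrained weights, for shape illustration only.
W1 = rng.normal(scale=0.1, size=(N_PARAMS, N_HIDDEN))
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_AUS))

def predict_aus(params: np.ndarray) -> np.ndarray:
    """Feed-forward pass: deformable-model parameters -> AU intensities."""
    h = np.tanh(params @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid over AU activations

aus = predict_aus(rng.normal(size=N_PARAMS))  # one intensity per AU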
Methods that Obtain Parameters Related to the Face Animation Synthesis Used
Some face animation systems require action parameters as input that specify
how far to open the mouth, the position of the eyelids, the orientation of the
eyes, etc., in terms of parameter magnitudes associated with physical
displacements. The analysis methods studied in this section measure
displacements and feature magnitudes on the images in order to derive the
actions to be performed on the head model. These methods do not evaluate the
expression on the person's face; rather, they extract the measurements that
permit synthesizing it on a model, as shown in Figure 7.
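For instance, a measured lip separation only becomes a usable action parameter once it is made independent of image scale. The following Python sketch is hypothetical; the landmark names and the normalization by inter-ocular distance are assumptions for illustration, not details of any system discussed here.

import numpy as np

def mouth_opening(landmarks: dict) -> float:
    """Vertical lip separation, normalized by the inter-ocular distance
    so the parameter does not depend on image scale or camera distance."""
    iod = np.linalg.norm(landmarks["left_eye"] - landmarks["right_eye"])
    gap = np.linalg.norm(landmarks["upper_lip"] - landmarks["lower_lip"])
    return gap / iod

# Hypothetical landmark positions (x, y) in pixels for one video frame.
frame = {
    "left_eye":  np.array([110.0, 120.0]),
    "right_eye": np.array([190.0, 120.0]),
    "upper_lip": np.array([150.0, 200.0]),
    "lower_lip": np.array([150.0, 228.0]),
}
print(mouth_opening(frame))  # 0.35 -> drives the jaw/lip action on the model

The same normalization idea applies to the other action parameters (eyelid position, eye orientation), each divided by a reference distance that is stable across frames.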
Terzopoulos and Waters (1993) developed one of the first solutions of this
nature. Their method tracks linear facial features to estimate the
corresponding parameters of a three-dimensional wireframe face model,
allowing them to reproduce facial expressions. A significant limitation of the
system is that it requires facial features to be highlighted with make-up for
successful tracking.
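Their trackers are built on deformable contours (snakes). As a rough modern analogue, not the authors' implementation, the sketch below fits a snake to a feature in each frame with scikit-image, reusing the previous fit to initialize the next frame; the initial ellipse around the feature and the energy weights are assumptions.

import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def track_contour(frames, init_snake):
    """Fit a snake to a facial feature in every frame; each fit seeds
    the next frame, which works when inter-frame motion is small."""
    snake, fits = init_snake, []
    for frame in frames:
        smoothed = gaussian(frame, sigma=3)  # snakes need smooth gradients
        snake = active_contour(smoothed, snake,
                               alpha=0.015, beta=10.0, gamma=0.001)
        fits.append(snake)
    return fits

# Hypothetical initialization: an ellipse around the mouth region,
# given as (row, col) points on 240x320 grayscale frames.
t = np.linspace(0, 2 * np.pi, 100)
init = np.column_stack([170 + 15 * np.sin(t), 160 + 35 * np.cos(t)])
frames = [np.random.rand(240, 320) for _ in range(3)]  # stand-in frames
tracked = track_contour(frames, init)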
Although active contour models are used, the system is still passive. The tracked