Motion modeling of facial features
To extract motion information from specific features of the face (eyes, eyebrows, lips, etc.), we must know the animation semantics of the FA system that will synthesize the motion. Deformable models, such as snakes, deliver information about a feature in the form of the magnitudes of the parameters that control the analysis. It is also necessary to relate these parameters to the actions that we must apply to the 3D model to recreate motion and expressions. There are many different image-processing techniques for analyzing face features, and at least as many corresponding feature motion models, which translate the analysis results into face animation parameters.
Malciu and Prêteux (2001) track face features using snakes. Their snakes are at the same time deformable models containing the Facial Definition Parameters (FDPs) defined in the MPEG-4 standard (MPEG-4, 2000). Their technique tracks FDPs very efficiently, but it does not output the Facial Animation Parameters (FAPs) that would animate the model to reproduce the observed feature motion. Chou, Chang and Chen (2001) go one step further. They present an analysis technique that searches for the points belonging to the projection of a simple 3D model of the lips, which also contains the FDPs. From the projected locations they derive the FAPs that operate on these points to generate the studied motion. Since one FAP may act on more than one point of their lip model, they use a least-squares solution to obtain the magnitudes of the FAPs involved (see the sketch after this paragraph). Goto, Kshirsagar and Magnenat-Thalmann (1999) use a simpler approach in which image processing is reduced to edge detection and the extracted data are mapped in terms of motion interpretation: open mouth, closed mouth, half-open mouth, etc. The magnitude of the motion is related to the location of the edges. They extend this technique to the eyes, developing their own eye motion model. Similarly, eyebrows are tracked in the image and associated with model actions.
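The following is a minimal sketch of the least-squares idea used to recover FAP magnitudes from observed point displacements. The matrix sizes, the basis matrix and the synthetic data are illustrative assumptions, not values from Chou, Chang and Chen (2001); the point of the example is only the shape of the problem: each FAP moves several lip points, so the magnitudes are found by solving an overdetermined linear system.

```python
import numpy as np

# Assumed setup: 8 tracked lip points (2D each) driven by 3 FAPs.
n_points, n_faps = 8, 3
rng = np.random.default_rng(0)

# Each column holds the image-space displacement that one unit of a given FAP
# produces on every tracked point (stacked into a (2*n_points) x n_faps matrix).
# Here it is random; in a real system it comes from the lip model.
fap_basis = rng.normal(size=(2 * n_points, n_faps))

# Observed displacements of the projected lip points between two frames
# (synthesized from known magnitudes plus a little noise for this sketch).
true_faps = np.array([1.5, -0.3, 0.8])
observed = fap_basis @ true_faps + 0.01 * rng.normal(size=2 * n_points)

# Least-squares estimate of the FAP magnitudes that best explain the motion.
fap_magnitudes, residuals, rank, _ = np.linalg.lstsq(fap_basis, observed, rcond=None)
print(fap_magnitudes)  # close to the true [1.5, -0.3, 0.8]
```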
Estimators
Once facial expressions are visually modeled by some image-processing technique, we obtain a set of analysis parameters. These parameters are mapped onto the corresponding face animation parameters by solving for the estimator that relates face motion parameters to analysis parameters. Establishing this mapping requires a training process. Among others, we find the following estimators: linear, neural networks and RBF networks. We will describe the first two in detail.
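As a concrete illustration of the linear case, the sketch below fits a linear estimator from training pairs of analysis parameters and FAPs, then applies it to a new analysis vector. All dimensions and data are placeholder assumptions; a real system would use analysis/FAP pairs recorded during the training process described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_analysis, n_faps = 200, 6, 4

# Training set: analysis parameters X (e.g., snake or edge measurements)
# and the FAP vectors Y observed for the same frames (synthetic here).
X = rng.normal(size=(n_samples, n_analysis))
X_aug = np.hstack([X, np.ones((n_samples, 1))])        # append a bias column
true_W = rng.normal(size=(n_analysis + 1, n_faps))
Y = X_aug @ true_W + 0.05 * rng.normal(size=(n_samples, n_faps))

# Training: fit the linear map W by ordinary least squares over the pairs.
W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

# Run time: a new analysis vector is mapped directly to FAP estimates.
x_new = rng.normal(size=n_analysis)
fap_estimate = np.hstack([x_new, 1.0]) @ W
print(fap_estimate)
```

The same training data could instead drive a neural-network or RBF estimator; the linear map is simply the cheapest choice and gives a closed-form solution.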
 