2. Future Work
To improve the 3D face processing framework, future research should be
conducted in the following directions.
2.1 Improving geometric face processing
Geometric face processing can be improved by exploiting the statistics of
increasingly available 3D face data. One direction is to estimate better 3D
face geometry from a single face image following the approach of Blanz and
Vetter [Blanz and Vetter, 1999]. The improved 3D face estimation can provide
a better 3D model fit for the first video frame in non-rigid face tracking.
The more accurate 3D face model will also improve the performance of face
relighting techniques described in Section 1 of Chapter 8.
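As a rough illustration of this kind of fitting, the shape coefficients of a PCA-based morphable model can be estimated from sparse 3D landmarks by regularized least squares against the shape basis. The sketch below is a deliberately simplified, landmark-only version (the function name, toy dimensions, and data are illustrative assumptions; Blanz and Vetter fit a full model to image pixels with an analysis-by-synthesis loop):

```python
import numpy as np

def fit_morphable_model(x_obs, mean_shape, basis, reg=1e-2):
    """Estimate morphable-model shape coefficients from observed 3D landmarks.

    Solves the regularized least-squares problem
        min_a ||B a - (x_obs - mean)||^2 + reg * ||a||^2,
    a landmark-only simplification of Blanz-and-Vetter-style fitting.
    """
    d = x_obs - mean_shape                                # residual from the mean face
    BtB = basis.T @ basis + reg * np.eye(basis.shape[1])  # regularized normal equations
    return np.linalg.solve(BtB, basis.T @ d)              # coefficient vector a

# Toy example: 4 landmarks (12 coordinates), 2 shape components.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=12)
basis = rng.normal(size=(12, 2))
true_a = np.array([0.5, -0.3])
x_obs = mean_shape + basis @ true_a     # synthetic, noise-free observations
a_hat = fit_morphable_model(x_obs, mean_shape, basis, reg=1e-6)
```

The regularization term plays the role of the Gaussian prior on shape coefficients that a statistical shape model provides; with noisy landmarks a larger `reg` trades data fit for plausibility.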
Another direction is to collect motion capture data from more subjects so that
the model derived from the data can better describe the variations across different
people. As a result, facial motion analysis could be applied to a larger variety of
people. For synthesis, such a database would enable the study of personalized
styles in visual speech or facial expression synthesis.
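The kind of statistical model meant here can be sketched as PCA over motion trajectories pooled across subjects: each subject's captured sequence is flattened into one feature vector, and the principal components capture the variation across people. This is a minimal sketch under that assumption (the function name and dimensions are illustrative):

```python
import numpy as np

def learn_motion_model(trajectories, n_components=2):
    """Learn a linear cross-subject motion model from motion-capture data.

    `trajectories` is (n_subjects, n_features): each row is one subject's
    motion sequence flattened into a feature vector.  PCA over the rows
    captures the variation across subjects.
    """
    mean = trajectories.mean(axis=0)
    centered = trajectories - mean
    # Right singular vectors of the centered data are the principal motion modes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components], s[:n_components]
```

With more subjects in the database, the retained modes span more of the true cross-person variation, which is precisely why a larger capture effort would broaden the population the analysis can handle.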
2.2 Closer correlation between geometry and appearance
In our current 3D face processing framework, we first use a geometric model
to process the geometric-level motion. The remaining appearance details
are then handled by the flexible appearance model. In this procedure, we assume that
the geometric processing part gives reasonable results so that face textures are
correctly aligned with the geometry. However, this assumption is not always
true. For example, in 3D face motion analysis, if the geometric tracking gets lost,
the extracted face texture would be wrong and the appearance-based analysis
would then fail.
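One way to make this failure mode at least detectable is to monitor how well the appearance model explains the extracted texture: a texture far from the model subspace suggests the geometric tracking has drifted. The following sketch assumes a PCA appearance model with an orthonormal basis; the function names and the threshold are illustrative, not part of the framework described above:

```python
import numpy as np

def texture_reconstruction_error(texture, mean_tex, tex_basis):
    """Distance from an extracted face texture to a PCA appearance subspace."""
    d = texture - mean_tex
    coeffs = tex_basis.T @ d           # project onto the (orthonormal) basis
    recon = tex_basis @ coeffs         # best reconstruction within the subspace
    return np.linalg.norm(d - recon)   # out-of-subspace residual

def tracking_lost(texture, mean_tex, tex_basis, threshold=10.0):
    """Flag a frame whose texture the appearance model cannot explain."""
    return texture_reconstruction_error(texture, mean_tex, tex_basis) > threshold
```

A large residual does not identify where the tracking failed, but it can trigger re-initialization before the appearance-based analysis is corrupted.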
One solution to this problem is to derive constraints on the geometric
tracking from appearance models. La Cascia et al. [Cascia et al., 2000] model
the face with a texture-mapped cylinder. The constraint is that the face image
should be a projection of the texture-mapped cylinder. They formulate 3D rigid
face tracking as a texture image registration problem, in which the global
rotation and translation parameters of the cylinder are estimated. Recently, Vac-
chetti et al. [Vacchetti et al., 2003] use the face appearances in a few key frames
and the preceding frame as constraints for estimating the 3D rigid geometric
face motions. These constraints help to reduce drifting and jittering, even when
there are large out-of-plane rotations and partial occlusions. In these methods,
the texture variation models constrain the feasible solution space of the
geometric tracking and thereby improve its robustness. This in turn
leads to more robust appearance-based motion analysis which