onto the image plane creates the synthetic images (S-) shown in Figure
13. The quality of the synthetic faces is sufficient, especially for small
changes of the facial expression relative to the original image (O-1).
Creating a higher-quality synthetic face requires a more detailed face model
with more triangles. This high-complexity face model is textured from the
original image (O-1) in Figure 14. The synthetic images (S-) in Figure 14
show the results of animating the high-complexity face model. Using the
high-complexity face model yields a visually more convincing facial
animation, although at the expense of higher processing complexity.
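The texturing step above (projecting the model's vertices onto the original image O-1 and using the projected positions as texture coordinates) can be sketched as follows; a simple pinhole-camera projection is assumed, and the function name and camera parameters are illustrative, not taken from the chapter:

```python
import numpy as np

def project_to_image_plane(vertices, focal_length, cx, cy):
    """Perspective-project 3D vertices (N x 3, camera coordinates, z > 0)
    onto the image plane; returns N x 2 pixel coordinates (u, v)."""
    p = np.asarray(vertices, dtype=float)
    u = focal_length * p[:, 0] / p[:, 2] + cx
    v = focal_length * p[:, 1] / p[:, 2] + cy
    return np.column_stack([u, v])

# Texturing: each vertex's projected position in the original image O-1
# becomes that vertex's texture coordinate (values here are made up).
verts = np.array([[0.0, 0.0, 2.0],      # vertex on the optical axis
                  [0.1, -0.05, 2.0]])   # vertex slightly off-axis
tex_coords = project_to_image_plane(verts, focal_length=600.0,
                                    cx=176.0, cy=144.0)
```

The principal point (cx, cy) is chosen here as the center of a CIF-sized (352x288) videophone frame; a vertex on the optical axis projects exactly onto it.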
Conclusions
A framework for automatic 3D face model adaptation has been introduced
that is suited to visual-communication applications such as video telephony
and video conferencing. Two complexity modes have been realized: a
low-complexity mode for less powerful devices, such as mobile phones, and a
high-complexity mode for more powerful devices, such as PCs. The framework
consists of two parts. In the first part, facial features are estimated in
the images. For the low-complexity mode, only eye and mouth features are
estimated: parametric 2D models of the eyes, the open mouth, and the closed
mouth are introduced, and the parameters of these models are estimated. For
the high-complexity mode, additional facial features, such as the eyebrow,
nose, chin, and cheek contours, are also estimated. In the second part, the
estimated facial features from the first part are used to adapt a generic
3D face model. The low-complexity mode uses the 3D face model Candide,
which is adapted using the eye and mouth features only. The high-complexity
mode uses a more detailed 3D face model, which is adapted using all
estimated facial features. Experiments evaluating the different parts of
the face model adaptation framework show that the standard deviation of the
2D estimation error is below 2.0 pel for the eye and mouth features and
below 2.7 pel for all facial features. Tests with natural videophone
sequences show that automatic 3D face model adaptation is possible in both
complexity modes. The high-complexity mode achieves better synthesis
quality of the facial animation, at the cost of a higher computational load.