approach for automatically adapting a generic face model to individual faces
without these kinds of limitations has not yet been developed.
In this chapter, a complete framework for 3D face model adaptation based on
monocular facial images without human interaction is presented. The proposed
framework does not impose limitations such as requiring a closed mouth or a
neutral facial expression. Within this framework, a two-step approach for face
model adaptation is introduced. In the first step, facial features are estimated
from the first frames of the video sequence. In the second step, the 3D face
model is adapted using these estimated facial features. Furthermore, face model
adaptation is carried out in two complexity modes. For the low complexity mode,
the face model Candide (Rydfalk, 1987), which has a small number of triangles,
is used, and only eye and mouth features are estimated, since these features are
the most important for the visual impression. For facial animation in the low
complexity mode, Action Units are used. For the high complexity mode, an
advanced face model with a higher number of triangles is used, and additional
facial features such as chin and cheek contours, eyebrow and nose features are
further estimated. In the high complexity mode, a muscle-based model is
employed for facial animation.
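The Action-Unit-based animation used in the low complexity mode can be sketched as a weighted sum of vertex displacement fields applied to a neutral wireframe. The displacement values below are illustrative placeholders, not taken from the actual Candide specification:

```python
# Sketch of Action-Unit-based animation for a low-complexity wireframe
# face model such as Candide. Each Action Unit is modeled as a field of
# per-vertex displacements scaled by an intensity; the AU data below is
# hypothetical and only illustrates the mechanism.

import numpy as np

def animate(neutral_vertices, action_units, intensities):
    """Displace the neutral wireframe by a weighted sum of Action Units.

    neutral_vertices : (N, 3) array of 3D vertex positions
    action_units     : list of (N, 3) displacement fields, one per AU
    intensities      : list of scalar AU activations
    """
    animated = neutral_vertices.copy()
    for displacement, intensity in zip(action_units, intensities):
        animated += intensity * displacement
    return animated

# Toy example: a 3-vertex "mouth" region and one hypothetical AU that
# lowers it vertically (a jaw-drop-like movement) at half intensity.
neutral = np.zeros((3, 3))
jaw_drop = np.array([[0.0, -0.1, 0.0]] * 3)
opened = animate(neutral, [jaw_drop], [0.5])
```

Because the Action Units combine linearly, several expressions (for example a mouth opening plus an eyebrow raise) can be superimposed by simply extending the two lists.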
This chapter is organized as follows. The next section presents the two face
models of different complexities and their animation parameters. The following
section describes algorithms for facial feature estimation. Special emphasis is
given to the estimation of eye and mouth features. The fourth section presents
the algorithms for 3D face model adaptation using the facial features estimated
in the third section. Experimental results are presented in the final section.
3D Face Models
For visual communication like video telephony or video conferencing, a real
human face can be represented by a generic 3D face model that must be adapted
to the face of the individual. The shape of this 3D face model is described by a
3D wireframe. In addition, scaling and facial animation parameters are associated
with the face model. Scaling parameters describe the adaptation of the face
model towards the real shape of the human face, e.g., the size of the face, the
width of the eyes or the thickness of the lips. Once determined, they remain fixed
for the whole video telephony or video conferencing session. Facial animation
parameters describe the facial expressions of the face model, e.g., local
movements of the eyes or mouth. These parameters change over time with
the variations of the real face's expressions. In this framework, face model
adaptation is carried out in two complexity modes, with a low complexity face
model and a high complexity face model. These two face models are described in
detail below.
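The separation described above, between scaling parameters that are fixed once per session and animation parameters that vary per frame, can be sketched as a simple data structure. All field and parameter names here are illustrative assumptions, not taken from the chapter:

```python
# Minimal sketch of a generic face model: a 3D wireframe plus two sets
# of parameters. Scaling parameters are determined once during model
# adaptation and then stay fixed for the session; animation parameters
# are updated every frame to follow the real face's expressions.

from dataclasses import dataclass, field

@dataclass
class FaceModel:
    vertices: list                                 # 3D wireframe vertex positions
    triangles: list                                # index triples into `vertices`
    scaling: dict = field(default_factory=dict)    # e.g. face size, eye width
    animation: dict = field(default_factory=dict)  # e.g. mouth opening

    def adapt(self, scaling_params):
        """Set scaling parameters once, at the start of the session."""
        self.scaling.update(scaling_params)

    def update_expression(self, animation_params):
        """Animation parameters change from frame to frame."""
        self.animation.update(animation_params)

model = FaceModel(vertices=[(0.0, 0.0, 0.0)], triangles=[])
model.adapt({"eye_width": 1.1, "lip_thickness": 0.9})   # fixed after adaptation
model.update_expression({"mouth_opening": 0.4})         # per-frame update
```

Keeping the two parameter sets apart mirrors the session structure of video telephony: `adapt` runs once against the first frames, while `update_expression` runs in the per-frame animation loop.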