determined by projecting the face model in the image plane. Then, the head tilt
is adapted. The angle between the line through the eye middle positions and the
horizontal image axis is a measure of the head tilt. Using the angle measured in
the image, the tilt of the face model is adjusted. After that, the face size is scaled.
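The tilt measurement described above can be sketched as follows; the function name and the eye-coordinate convention (pixel coordinates with y growing downward) are illustrative assumptions, not part of the original method:

```python
import math

def head_tilt_deg(left_eye, right_eye):
    """Angle between the line through the two eye middle positions
    and the horizontal image axis, in degrees (illustrative sketch).

    left_eye, right_eye: (x, y) pixel coordinates, y grows downward.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    # atan2 handles vertical eye lines and preserves the tilt sign.
    return math.degrees(math.atan2(dy, dx))

# Example: right eye 4 pel lower than the left eye, 40 pel apart
# horizontally -> a tilt of roughly 5.7 degrees.
tilt = head_tilt_deg((120.0, 100.0), (160.0, 104.0))
```

The measured angle would then be applied as a rotation of the face model about its depth axis before the subsequent scaling steps.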
The distance between the eye middle positions is used for scaling the face width;
the distance between the center of the eye middle positions and the mouth middle
position, for scaling the face height. The next step of face model adaptation is the
adjustment of the jaw rotation. Here, the jaw of the face model is rotated until
the projection of the face model's mouth opening onto the image plane matches
the estimated mouth opening in the image. For scaling of the chin and cheek
contours, the chin and cheek vertices of the face model are individually shifted
so that their projections match the estimated face contour in the image. In order
to maintain the proportions of a human face, all other vertices of the face model
are shifted as well, by an amount inversely proportional to the distance from the
vertex to the face model's chin and cheek contour. Finally, scaling and facial animation
parameters for the rest of the facial features (eyes, mouth, eyebrows, and nose)
are calculated by comparing the estimated facial features in the image with
projections of the corresponding features of the face model. For scaling, the
width, thickness and position of the eyebrows, the width of the eyes, the size of
the iris, the width, height and depth of the nose, as well as the lip thickness are
determined. For facial animation, the rotation of the eyelids, the translation of the
irises, as well as facial muscle parameters for the mouth are calculated. These
scaling and facial animation parameters are then used for the adaptation of the
high complexity face model.
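The proportion-preserving vertex shift described above can be sketched as an inverse-distance weighting of the contour shifts; the exact weighting and normalization used in the original method are not specified, so the scheme below (normalized reciprocal-distance weights) is an assumption:

```python
import math

def shift_remaining_vertices(vertices, contour_shifts, eps=1e-6):
    """Propagate chin/cheek contour shifts to the remaining vertices.

    Each non-contour vertex is moved by a weighted average of the
    contour shifts, with weights reciprocal to the distance between the
    vertex and each contour vertex (assumed scheme): nearby vertices
    follow the contour closely, distant ones barely move.

    vertices:       {vid: (x, y)} non-contour vertex positions
    contour_shifts: {vid: ((x, y), (dx, dy))} contour vertex positions
                    and the shifts already applied to them
    """
    shifted = {}
    for vid, (x, y) in vertices.items():
        wsum = 0.0
        sx = sy = 0.0
        for (cx, cy), (dx, dy) in contour_shifts.values():
            d = math.hypot(x - cx, y - cy)
            w = 1.0 / (d + eps)  # reciprocal-distance weight
            wsum += w
            sx += w * dx
            sy += w * dy
        shifted[vid] = (x + sx / wsum, y + sy / wsum)
    return shifted
```

For example, a vertex lying close to a strongly shifted chin vertex receives nearly the full shift, while a vertex near an unshifted cheek vertex stays almost in place, which is what keeps the overall facial proportions intact.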
Experimental Results
The proposed framework is evaluated on the head-and-shoulder video sequences
Akiyo and Miss America at CIF resolution and a frame rate of 10 Hz.
Estimation of Facial Features
Figure 9 shows some examples of the estimated eye and mouth features overlaid
on the original images of the sequences Akiyo and Miss America. For accuracy
evaluation, the true values are manually determined from the natural video
sequences, and the standard deviation between the estimated and the true values
is measured. The estimate error for the pupil positions is 1.2 pel on average and