Facial Feature Extraction
The facial feature extraction scheme used in the system proposed in this chapter
is based on a hierarchical, robust scheme that copes with large variations in
appearance, both across diverse subjects and across different instances of the
same subject within real video sequences (Votsis, Drosopoulos & Kollias, 2003).
Only soft a priori assumptions are made about the pose of the face or the general
location of the features within it. Information about the face is revealed
gradually, with an optimization performed at each step of the hierarchical
scheme; each step produces a posteriori knowledge that guides the next, leading
to a step-by-step visualization of the sought features.
Face detection is performed first: skin segments, or blobs, are detected, merged
according to the probability that they belong to a facial area, and the most
salient skin-color blob or segment is identified as the face.
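The chapter's probabilistic blob merging is not reproduced here, but the overall step can be illustrated with a simpler stand-in: a fixed chrominance threshold for skin, morphological closing, and selection of the largest connected component as the face candidate. The threshold values and the largest-area criterion are assumptions made for this sketch, not the authors' model.

```python
# Minimal sketch of skin-blob face detection, assuming a BGR input frame.
# The Cr/Cb bounds are common illustrative values, not the chapter's
# trained skin-probability model.
import cv2
import numpy as np

def most_salient_skin_blob(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Crude skin mask in chrominance space (stand-in for the probabilistic merge).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Close small holes so skin pixels form connected segments (blobs).
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Label connected components and keep the largest as the face candidate.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None  # no skin blob found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255
```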
Following this, primary facial features such as the eyes, mouth and nose are
treated as major discontinuities on the segmented, arbitrarily rotated face. In
the first step of the method, the system performs an optimized segmentation
procedure. The initial estimates of the segments, also called seeds, are
approximated through min-max analysis and refined through the maximization of a
conditional likelihood function. Enhancement is then applied so that closed
objects emerge and some of the artifacts are removed. Seed growing is achieved
through expansion, utilizing the chromatic and value information of the input
image. The enhanced seeds form an object set whose in-plane facial rotation is
revealed by active contours applied to all objects of the set; this set is then
restricted to a finer one, in which the features and feature points are finally
labeled according to an error minimization criterion.
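As a concrete illustration of the seed-growing step, the sketch below expands a set of seed pixels over an HSV image, admitting neighbors whose hue (chromatic) and value components stay within fixed tolerances. The tolerances and the 4-connected expansion are assumptions of the sketch; the chapter refines seeds via conditional-likelihood maximization rather than fixed thresholds.

```python
# Minimal sketch of seed growing by expansion over chromatic (hue) and
# value information, assuming an HSV image and a list of (row, col) seeds.
from collections import deque
import numpy as np

def grow_seed(hsv, seeds, h_tol=10, v_tol=40):
    hue, val = hsv[..., 0].astype(int), hsv[..., 2].astype(int)
    grown = np.zeros(hsv.shape[:2], dtype=bool)
    queue = deque(seeds)
    for y, x in seeds:
        grown[y, x] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < grown.shape[0] and 0 <= nx < grown.shape[1]
                    and not grown[ny, nx]
                    # Expand only where hue and value stay close, as a crude
                    # stand-in for the likelihood-based refinement.
                    and abs(hue[ny, nx] - hue[y, x]) <= h_tol
                    and abs(val[ny, nx] - val[y, x]) <= v_tol):
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown
```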
Experimental Results
Figure 3 shows a characteristic frame from the “hands over the head” sequence.
After skin detection and segmentation, the primary facial features are shown in
Figure 4. Figure 5 shows the initially detected blobs, which include the face and
the mouth. Figure 6 shows the estimated eyebrow and nose positions. Figure 7
shows the initial neutral image used to calculate the feature point (FP)
distances. In Figure 8, the horizontal axis indicates the FAP number, while the
vertical axis shows the corresponding FAP values, estimated from the features
listed in the second column of Table 1.
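The FAP values plotted in Figure 8 are derived from changes in FP distances relative to the neutral frame of Figure 7. A minimal sketch of this computation is given below, following the MPEG-4 convention of normalizing each distance change by a FAPU measured on the neutral face; the point names, the FAPU choice, and the example coordinates are illustrative assumptions, and the actual FAP-to-feature mapping is the one given in Table 1.

```python
# Hedged sketch of FAP estimation from feature-point (FP) distances: each FAP
# is taken as the change of a distance relative to the neutral frame,
# normalized by a FAPU measured on that neutral frame.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def fap_value(fp_current, fp_neutral, a, b, fapu):
    """FAP estimate for the distance between feature points a and b."""
    d_now = dist(fp_current[a], fp_current[b])
    d_neutral = dist(fp_neutral[a], fp_neutral[b])
    return (d_now - d_neutral) / fapu

# Example: vertical mouth opening, normalized by the mouth-nose separation
# FAPU (MNS). Point names and coordinates are hypothetical.
neutral = {"upper_lip": (100, 180), "lower_lip": (100, 195), "nose": (100, 160)}
current = {"upper_lip": (100, 178), "lower_lip": (100, 205), "nose": (100, 160)}
mns = dist(neutral["nose"], neutral["upper_lip"]) / 1024  # MPEG-4 FAPU scaling
print(fap_value(current, neutral, "upper_lip", "lower_lip", mns))
```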