A common limitation of the above studies is that they consider face images
acquired under controlled conditions (e.g., the FERET database), which are
usually frontal and occlusion-free, with clean backgrounds, consistent lighting,
and limited facial expressions. However, in real-world applications, gender
classification needs
to be performed on real-life face images captured in unconstrained scenarios;
see Fig. 1 for examples of real-life faces. As can be observed, there are signif-
icant appearance variations in real-life faces, which include facial expressions,
illumination changes, head pose variations, occlusion or make-up, poor image
quality, and so on. Therefore, gender recognition on real-life faces is much more
challenging compared to the case of faces captured in constrained environments.
Few studies in the literature have addressed this problem. Shakhnarovich et al.
[10] made an early attempt by collecting over 3,500 face images from the web. On
this difficult data set, using Haar-like features, they obtained accuracies of
79.0% (Adaboost) and 75.5% (SVM). More recently, Gao and Ai [11] adopted the
probabilistic boosting tree with Haar-like features, and obtained the accuracy
of 95.51% on 10,100 real-life faces. However, the data sets used in these studies
are not publicly available; therefore, it is difficult to use them as benchmarks in
research.
In this paper, we focus on gender recognition on real-life faces. Specifically, we
use a recently built public database, the Labeled Faces in the Wild (LFW) [12].
To the best of our knowledge, this is the first study about gender classification
on this difficult database. Local Binary Patterns (LBP) [13] are employed to
extract facial features. We adopt Adaboost to learn the most discriminative
LBP features, which, when used with an SVM, achieve an accuracy of 94.44%. The
public database used in this study enables future benchmark and evaluation.
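As an illustration only (not the authors' exact operator, whose variant and parameters are not specified here), the basic 3×3 LBP code thresholds each pixel's eight neighbours against the centre and packs the results into an 8-bit pattern; a histogram of these codes over an image region then serves as a texture descriptor:

```python
import numpy as np

def lbp_3x3(image):
    """Basic 8-neighbour Local Binary Pattern on a 2-D grayscale array.

    Each interior pixel is replaced by an 8-bit code: neighbours that are
    >= the centre contribute a 1-bit at their position, clockwise from the
    top-left. (A sketch of the standard operator; uniform-pattern or
    multi-scale LBP variants differ.)
    """
    img = np.asarray(image, dtype=np.int32)
    c = img[1:-1, 1:-1]  # centre pixels (borders are skipped)
    # neighbour offsets, clockwise starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy : img.shape[0] - 1 + dy,
                 1 + dx : img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(image, bins=256):
    """Normalised histogram of LBP codes: a simple texture descriptor."""
    codes = lbp_3x3(image)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In practice the face is divided into sub-regions, a histogram is computed per region, and the concatenated histograms form the feature vector from which a booster such as Adaboost can select the most discriminative bins.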
2 Gender Recognition
2.1 Data Set
The Labeled Faces in the Wild is a database for studying unconstrained face
recognition; it contains 13,233 color face photographs of 5,749
subjects collected from the web. Fig. 1 shows example images in the database.
All the faces were detected by the Viola-Jones face detector, and the images
were centered on the detected faces and scaled to 250 × 250 pixels.
We manually labeled the ground truth regarding gender for each face. We
excluded faces that are not (near-)frontal, as well as those for which it is
difficult to establish the ground truth. Some examples of the removed faces are
shown in Fig. 2. In our experiments, we chose 7,443 face images (2,943 females
and 4,500 males); see Fig. 1 for some examples. As illustrated in Fig. 3, all
images were aligned with a commercial face alignment software [14], and then
grayscale faces of 127 × 91 pixels were cropped from the aligned images.
The data set we used will be shared online for public benchmark and evaluation.
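The two steps described above that do not depend on the proprietary alignment tool, grayscale conversion and cropping a fixed 127 × 91 window, can be sketched as follows. The centre-crop placement is an assumption for illustration; the paper's crop coordinates are determined by the alignment:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance conversion (ITU-R BT.601 weights) for an H x W x 3 array."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def center_crop(img, out_h=127, out_w=91):
    """Crop an out_h x out_w window from the centre of a 2-D image.

    Assumption: a centre crop is used here for illustration; in the paper
    the crop window is placed relative to the aligned facial landmarks.
    """
    h, w = img.shape
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]

# e.g. a 250 x 250 aligned color face -> 127 x 91 grayscale crop
face = center_crop(to_grayscale(np.zeros((250, 250, 3))))
```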