construct 3D face information for each subject in the training set. Then they synthesize face images under various lighting conditions to train their face recognizer. Blanz et al. [Blanz et al., 2002] recover the shape and texture parameters of a 3D Morphable Model in an analysis-by-synthesis fashion. These parameters are then used for face recognition. This method needs to compute a statistical texture and shape model from a 3D face database. The illumination effects are modeled by the Phong model [Foley and Dam, 1984]. When fitting the 3D morphable face model to an input face image, the illumination parameters are estimated along with the texture and shape parameters. However, because there are many parameters to estimate and the optimization is non-linear, the fitting is computationally expensive and needs good initialization.
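As a rough illustration of the analysis-by-synthesis idea, the following Python sketch fits a handful of shape coefficients and a light direction to a toy image by minimizing the rendering error. The height-map "face", the three shape basis functions, and the ambient-plus-diffuse shading are illustrative assumptions, not the actual Morphable Model or Phong setup of [Blanz et al., 2002]; the sketch only shows why the fitting is a non-linear least-squares problem that benefits from a good starting point.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy analysis-by-synthesis fit (illustrative only, not the cited method):
# a "face" is a 32x32 height map built from three hypothetical shape basis
# functions, shaded by simplified ambient + diffuse shading under a single
# directional light. Shape coefficients and the light direction are
# recovered by minimizing the pixel-wise rendering error.
H, W = 32, 32
ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
shape_basis = np.stack([np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 0.2)
                        for cx, cy in [(-0.4, 0.0), (0.4, 0.0), (0.0, 0.4)]])

def render(params):
    coeffs, light = params[:3], params[3:]
    light = light / np.linalg.norm(light)
    z = np.tensordot(coeffs, shape_basis, axes=1)      # height map from shape coefficients
    gy, gx = np.gradient(z)
    normals = np.stack([-gx, -gy, np.ones_like(z)])    # surface normals of the height map
    normals /= np.linalg.norm(normals, axis=0)
    diffuse = np.clip(np.tensordot(light, normals, axes=1), 0.0, None)
    return 0.2 + 0.8 * diffuse                         # ambient + diffuse shading

# Synthesize a target with known parameters, then fit from a rough initialization;
# a poor starting point can stall this non-linear optimization, as noted above.
true_params = np.array([1.0, -0.5, 0.8, 0.3, 0.2, 1.0])
target = render(true_params)
x0 = np.array([0.5, 0.0, 0.5, 0.0, 0.0, 1.0])
fit = least_squares(lambda p: (render(p) - target).ravel(), x0)
print("recovered parameters:", np.round(fit.x, 2))
```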
In general, appearance-based methods such as Eigenfaces [Turk and Pentland, 1991] and AAM [Cootes et al., 1998] need a number of training images for each subject in order to deal with illumination variability. Previous research suggests that illumination variability in face images is low-dimensional, e.g. [Adini et al., 1997, Basri and Jacobs, 2001, Belhumeur and Kriegman, 1998, Ramamoorthi, 2002, Epstein et al., 1995, Hallinan, 1994]. Using the spherical harmonics representation of Lambertian reflectance, Basri and Jacobs [Basri and Jacobs, 2001] and Ramamoorthi [Ramamoorthi, 2002] have derived this low-dimensional space theoretically. Furthermore, a simple scheme
for face recognition with excellent results is presented in [Basri and Jacobs, 2001]. However, to use this recognition scheme, the basis images spanning the illumination space for each face are required. These images can be rendered from a 3D scan of the face or can be estimated by applying PCA to a number of images of the same subject under different illuminations [Ramamoorthi, 2002]. An effective approximation of this basis by 9 single-light-source images of a face is reported in [Lee et al., 2001]. These methods need a number of images
and/or 3D scan data of the subjects in the database. They therefore require specialized equipment and procedures for the capture of the training set, which limits their applicability. Zhao and Chellappa [Zhao and Chellappa, 2000] use symmetric shape-from-shading, but their method suffers from the drawbacks of shape-from-shading, such as the assumption of a point light source. Zhang and Samaras [Zhang and Samaras, 2003] propose to recover the 9 spherical harmonic basis images from the input image. Nevertheless, the method in [Zhang and Samaras, 2003] needs a 3D database, as in [Blanz et al., 2002], to estimate a statistical model of the spherical harmonic basis images.
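To make the spherical-harmonic recognition scheme discussed above concrete, the following is a minimal sketch assuming that per-pixel albedo and unit surface normals are already available for each gallery subject (e.g., rendered from a 3D scan). The function names are ours, and the constant factors of the nine harmonics are omitted since they can be absorbed into the estimated lighting coefficients; this illustrates the idea rather than reproducing the exact formulation of [Basri and Jacobs, 2001].

```python
import numpy as np

def sh_basis_images(albedo, normals):
    """Nine spherical-harmonic basis images of a Lambertian face.
    albedo: (P,) per-pixel albedo; normals: (P, 3) unit surface normals.
    Constant factors of the harmonics are omitted (absorbed into lighting)."""
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    harmonics = np.stack([
        np.ones_like(nx),                 # l = 0
        nx, ny, nz,                       # l = 1
        nx * ny, nx * nz, ny * nz,        # l = 2
        nx ** 2 - ny ** 2,
        3.0 * nz ** 2 - 1.0,
    ], axis=1)                            # shape (P, 9)
    return albedo[:, None] * harmonics

def subspace_distance(probe, basis):
    """Residual of the least-squares projection of a probe image (P,)
    onto the 9-D illumination subspace spanned by the basis images."""
    coeffs, *_ = np.linalg.lstsq(basis, probe, rcond=None)
    return np.linalg.norm(basis @ coeffs - probe)

# Recognition: assign the probe image to the subject whose illumination
# subspace explains it best (smallest projection residual), e.g.:
# bases = {subject: sh_basis_images(albedo[subject], normals[subject]), ...}
# identity = min(bases, key=lambda s: subspace_distance(probe, bases[s]))
```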
In our framework, we show that our face relighting technique can be used to normalize the illumination effects in face images. The advantage of this method is that, unlike the previous methods, it does not require extra information to model the illumination effects. In our experiments, we demonstrate that this pre-processing step helps reduce error rates in face recognition under varying lighting conditions.