Figure 8.9. Examples from the Yale face database B [Georghiades et al., 2001]. From left to right, the images are from subsets 1 through 5.
simplest image correlation as the similarity measure between two images, and
nearest neighbor as the classifier. For the 10 subjects in the database, we take
only one frontal image per person as the training image. The remaining 630
images are used as testing images.
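The baseline recognition scheme described here, image correlation as the similarity measure and a nearest-neighbor classifier with one training image per subject, can be sketched as follows. This is a minimal illustration assuming grayscale images stored as NumPy arrays; the function names and the gallery dictionary are our own, not from the original system.

```python
import numpy as np

def correlation(a, b):
    """Normalized correlation between two images (flattened, zero-mean)."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def nearest_neighbor(test_img, gallery):
    """Classify test_img as the gallery subject with highest correlation.

    gallery: dict mapping subject id -> the single frontal training image.
    """
    return max(gallery, key=lambda sid: correlation(test_img, gallery[sid]))
```

With one frontal training image per subject, a test image is simply assigned the identity of the most correlated gallery image; no learning step is involved.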
The images are divided into five subsets according to the angle between the light
source direction and the camera optical axis. For the five subsets, the angles
are: (1) less than 12 degrees; (2) between 12 and 25 degrees; (3) between 25
and 50 degrees; (4) between 50 and 77 degrees; and (5) larger than 77 degrees.
The face images in the Yale database contain challenging examples for relighting:
for example, many images have strong cast shadows, or saturated or extremely
low pixel intensities. Figure 8.9 shows one sample image per subset of the Yale
face database B.
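The five-way partition by lighting angle can be expressed as a small helper. This is a sketch based only on the angle ranges stated above; how the boundary angles (exactly 12, 25, 50, or 77 degrees) are assigned is our assumption, since the text does not specify it.

```python
def lighting_subset(angle_deg):
    """Map the light-source angle (degrees from the camera optical axis)
    to its subset number 1-5, per the ranges given in the text.

    Boundary angles are assigned to the lower subset (an assumption).
    """
    if angle_deg < 12:
        return 1
    if angle_deg <= 25:
        return 2
    if angle_deg <= 50:
        return 3
    if angle_deg <= 77:
        return 4
    return 5
```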
We compare the recognition results using original images and relighted im-
ages. The experimental results are shown in Figure 8.10. We can see that the
recognition error rates are reduced after face relighting in all cases. As the
lighting angle grows, the illumination in the original test images differs more
from that in the training images, so the recognition error rates increase.
In these scenarios, our relighting technique significantly
reduces the error rates, even in very extreme conditions (e.g. lighting angles
larger than 77 degrees).
In summary, our face relighting technique provides an efficient and effective
way to normalize the illumination effects for face recognition. Compared to
other approaches, this method has the following advantages: (1) it does not
assume a simple point light source model; instead, it works under natural
illumination in the real world; (2) it needs only one training image per person
to model illumination effects, without requiring multiple training images or a
3D face database. In the future, we plan to further improve the results under
extreme lighting conditions. To deal with cast shadows, techniques using more
basis images, such as those in [Debevec et al., 2000], will be useful. The
difference between the generic 3D face geometry and the subject's actual
geometry may introduce artifacts. We plan to estimate a personalized geometric
model from