Step 3. Compute the shrinkage factor β_j^b at each directional subband b of each scale j by using Eq. (4-6).
Step 4. Reconstruct the illumination L'(x, y) from the modified NSCT coefficients by the inverse NSCT.
Step 5. Extract the illumination invariant R'(x, y) from the obtained L'(x, y) by Eq. (2) and use R'(x, y) for face recognition.
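The steps above can be sketched in Python. Since no standard NSCT implementation exists, the sketch below substitutes a simple stand-in multiscale decomposition (differences of Gaussian-blurred copies) for the NSCT, uses plain soft shrinkage in place of the paper's shrinkage factor from Eq. (4-6), and assumes Eq. (2) has the usual logarithmic form R' = log(I) - L'; all of these are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur via 1-D convolutions along rows, then columns.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

def extract_invariant(img, sigmas=(1.0, 2.0, 4.0), thresh=0.1):
    """Illustrative analogue of Steps 3-5: decompose the log image,
    shrink the detail ("subband") coefficients, reconstruct the
    illumination L', and take R' = log(I) - L'."""
    log_img = np.log1p(img.astype(float))
    approx = log_img
    details = []
    for s in sigmas:                       # stand-in multiscale decomposition
        blurred = gaussian_blur(approx, s)
        details.append(approx - blurred)   # detail "subband" at this scale
        approx = blurred
    # Step 3 analogue: soft shrinkage of the detail coefficients
    shrunk = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0) for d in details]
    # Step 4 analogue: reconstruct illumination from the modified coefficients
    L = approx + sum(shrunk)
    # Step 5 analogue: illumination invariant in the log domain
    return log_img - L
```

On a uniformly lit region the detail coefficients are shrunk to zero, so L' absorbs the entire log image and R' is close to zero there, which is the intended behaviour of an illumination-invariant feature.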
3 Experimental Results and Discussions
3.1 Datasets
To evaluate the performance of the proposed method for illumination-invariant
extraction, we apply it to two well-known databases, the Yale face database B
and the CMU PIE database. In the recognition phase, PCA is employed to extract
global features, and the nearest neighbor classifier based on Euclidean distance is
used for classification. The important statistics of these databases are summarized below:
The Yale face database B contains images of 10 individuals in nine poses
with 64 illumination conditions per pose. We use only the frontal face images under
the 64 illumination conditions for evaluation. All images are resized to 100×100 and
roughly aligned between subjects. The face images of each subject can be divided
into five subsets according to the angle of the light-source direction:
Subset 1 (0-12°), Subset 2 (13-25°), Subset 3 (26-50°), Subset 4 (51-77°) and
Subset 5 (above 78°). Fig. 1 shows the images under different illumination angles.
Fig. 1. The images under different illumination angles for the same person in the YaleB database
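The angle-based subset division above amounts to a simple threshold rule, which can be written as a small helper (the function name is illustrative):

```python
def yaleb_subset(angle_deg):
    """Map a light-source angle (in degrees) to its Yale B subset (1-5),
    following the ranges given in the text: 0-12, 13-25, 26-50, 51-77, 78+."""
    upper_bounds = (12, 25, 50, 77)  # upper angle bounds of Subsets 1-4
    for subset, bound in enumerate(upper_bounds, start=1):
        if angle_deg <= bound:
            return subset
    return 5  # Subset 5: angles above 78 degrees
```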
The CMU PIE database contains 68 subjects with 1428 frontal face images under
21 different illumination conditions, captured with the background lighting off. All
images from the database were roughly aligned and resized to 100×100. Fig. 2 shows
the different lighting images for a single subject.
Fig. 2. The images under different illumination angles for the same person in the CMU PIE database
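The recognition phase described above (PCA features followed by a nearest neighbor classifier under Euclidean distance) can be sketched as follows; the function names are illustrative, and images are assumed to be flattened into row vectors:

```python
import numpy as np

def pca_fit(X, n_components):
    # X: (n_samples, n_features) matrix of flattened face images.
    # The principal axes are the right singular vectors of the centered data.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    # Project centered samples onto the retained principal axes.
    return (X - mean) @ components.T

def nn_classify(train_feats, train_labels, test_feats):
    # Nearest neighbor under Euclidean distance: full (n_test, n_train)
    # distance matrix via broadcasting, then argmin per test sample.
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    return train_labels[np.argmin(d, axis=1)]
```

Each test image is thus assigned the label of the training image whose PCA projection lies closest in Euclidean distance.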