algorithms to the produced fits. By applying and comparing different face-matching methods,
we end up with a complete 3D face recognition system with high recognition rates for all three
data sets.
Starting with the 3D face scans from a data set, we apply our face segmentation method
(Section 4.2). The presented face segmentation method correctly normalized the pose of all
face scans and adequately extracted the tip of the nose in each of them. For the 953 scans of the
UND face set, we evaluated the tip of the nose extraction by computing the average distance
and standard deviation of the 953 automatically selected nose tips to our manually selected
nose tips, which was 2.3 ± 1.2 mm. Because our model-fitting method aligns the face scan to
the mean face, and at a later stage to the coarsely fitted face instance, these results are good
enough.
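As an aside, a minimal sketch of this nose-tip evaluation, assuming the automatic and manual nose-tip positions are available as N x 3 coordinate arrays in millimetres (the function and array names, and the use of NumPy, are illustrative assumptions rather than part of the described method):

import numpy as np

def nose_tip_error(auto_tips, manual_tips):
    # Per-scan Euclidean distance (mm) between the automatically and
    # manually selected nose tips, both given as N x 3 arrays.
    dists = np.linalg.norm(np.asarray(auto_tips) - np.asarray(manual_tips), axis=1)
    # Report the average distance and its standard deviation.
    return dists.mean(), dists.std()

For the 953 UND scans, such a computation yields the 2.3 mm average and 1.2 mm standard deviation quoted above.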
We evaluated the face model fitting as follows. Each segmented face was aligned to S and
the coarse-fitting method of Section 4.3.3 was applied. After the improved alignment of the
scan data to S_coarse, the fine-fitting method of Section 4.3.4 was applied to either the entire face or to each of the individual components. For a fair comparison, the same post-processing steps (Sect. 4.3.5) were applied to the final S_fine instances. Figure 4.4 shows qualitatively better fits when multiple components are used instead of a single component. Globally, this is visible in the more frequent surface interpenetration of the fitted model and the face scan, which indicates a tighter fit; locally, it is visible in facial features such as the nose, lips, and eyes. Note that our
fitting method correctly neglects facial hair and interpolates holes, which is often a problem
for 3D face recognition methods.
To quantitatively evaluate the produced fits, we determined the RMS distance (Eq. 4.2) for
each of the fitted models to their face scan, d_rms(S_final, scan). To report only the measurements in overlapping face regions, points paired with boundary points are not included. Also, outliers, i.e. point pairs with a distance larger than 10 mm, are not taken into account. The RMS
errors are shown in Table 4.1. Note that the UND scans have a higher resolution and thus
smaller point-to-point distances; the RMS distances are therefore lower for the UND set than for the GAVAB and BU-3DFE sets.
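One way to compute such an RMS measure is sketched below, under the assumption that Eq. 4.2 takes the root mean square of closest-point distances from the fitted model to the scan; the array names, the boundary mask, and the use of SciPy's cKDTree are assumptions for illustration, not the implementation used here:

import numpy as np
from scipy.spatial import cKDTree

def rms_distance(model_pts, scan_pts, scan_is_boundary, max_dist=10.0):
    # Closest scan point for every vertex of the fitted model.
    dists, idx = cKDTree(scan_pts).query(model_pts)
    # Exclude pairs that hit a scan boundary point and outlier pairs beyond 10 mm.
    keep = ~scan_is_boundary[idx] & (dists <= max_dist)
    return np.sqrt(np.mean(dists[keep] ** 2))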
Blanz et al. (2007) reported the accuracy of their model-fitting method using the average
depth error between the depth images of the input scan and the output model, neglecting point-
pairs with a distance larger than 10 mm. To compare the accuracy of our method with their
method, we produced cylindrical depth images (as in Fig. 4.3c) for both the segmented face scan and the fitted model, and computed the average depth error |d_scan(θ, y) − d_final(θ, y)| over all cylindrical image coordinates (θ, y),
excluding the outliers. Because of the surface mesh resampling, these projection errors
(Table 4.2) are resolution independent. The GAVAB set has more acquisition artifacts causing
higher projection errors, with high maximum projection errors in particular. The available
BU-3DFE scans were heavily smoothed right after the acquisition process, causing lower
projection errors than the high-resolution UND scans.
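A minimal sketch of this projection-error measure, assuming both cylindrical depth images are resampled on the same (θ, y) grid with NaN marking pixels without data (the names d_scan and d_final are taken from the error term above; the function itself is illustrative):

import numpy as np

def avg_depth_error(d_scan, d_final, max_err=10.0):
    # Absolute depth difference per (theta, y) pixel of the cylindrical images.
    diff = np.abs(d_scan - d_final)
    # Ignore pixels without data (NaN) and outlier pairs beyond 10 mm.
    valid = np.isfinite(diff) & (diff <= max_err)
    return diff[valid].mean()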
The errors reported in Tables 4.1 and 4.2 are all in favor of the multiple component fits with
an average RMS gain of 0.2 mm per point pair. However, only a marginal gain in accuracy is obtained when seven components are used instead of four. So, with the use of multiple components we can increase the model's expressiveness to some extent.
Comparison. Blanz et al. (2007) reported a mean depth error over 300 UND scans of
1.02 mm when they neglected outliers. For our single component fitted to UND scans, the error d_avr.depth is 0.65 mm, which is already more accurate. For the fitted multiple components these errors are 0.47 mm and 0.43 mm, for four and seven components, respectively.