means for feature extraction. In fact, the segmented regions and their attributes can differ from one range image to another depending on the facial expression.
2.4.3 Point 3D Facial Features
A well-known paradigm for object recognition is to first detect points of interest, called key points. Point features, called descriptors or point signatures, are then extracted from the local neighborhood of each detected key point. When matching a pair of images, not all key points need to be detected in both images, nor do all of them need to match. Nevertheless, a high rate of key-point detection and matching translates into a high similarity measure. Approaches following this paradigm are less sensitive to clutter and occlusions and do not require segmenting the object from the background.
Many approaches to 3D face recognition follow this paradigm. In addition to the aforementioned advantages of key-point-based recognition, they show a degree of invariance to facial expressions. In the general case, the key points need not be at anatomically distinctive locations. As a special case, however, the key points may be the fiducial points (the landmarks) of the face.
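As a rough illustration of this paradigm, the following sketch (hypothetical code, not from the text) matches two sets of key-point descriptors by greedy nearest-neighbor search and uses the fraction of matched key points as a similarity measure. The descriptor dimensionality, distance, and threshold are all illustrative assumptions.

```python
import numpy as np

def match_keypoints(desc_a, desc_b, threshold=0.5):
    """Greedy nearest-neighbor matching between two descriptor sets.

    desc_a, desc_b: (n, d) arrays, one descriptor per detected key point.
    Returns the list of matched index pairs (i, j); the fraction of
    matched key points serves as a similarity measure between the scans.
    """
    matches = []
    used = set()  # each gallery descriptor may be matched at most once
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < threshold and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches

# Two near-identical "scans": every key point should find its match.
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 8))
b = a + rng.normal(scale=0.01, size=a.shape)
sim = len(match_keypoints(a, b)) / len(a)
```

Note that not every key point has to match: missing detections or occluded regions simply lower `sim` rather than causing failure.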
Sphere and surface intersection method: One of the earliest approaches in this category is that of Chua et al. (2000). Although the notion of key-point detection was not strongly present, the approach is largely based on point descriptors. It defines a point descriptor as the closed curve formed by intersecting the 3D facial surface with a sphere centered at the 3D point. The curve of intersection is represented by its orthogonal distance to the tangential plane, d(θ), parametrized by an angle θ measured from a reference vector. The reference vector is defined by the maximum distance from the 3D point to any point on the projected curve. Two descriptors d1(θ) and d2(θ) are matched by computing the integral of their absolute difference, ∫ |d1(θ) − d2(θ)| dθ.
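The curve-matching step can be sketched numerically as a discrete approximation of the integral ∫ |d1(θ) − d2(θ)| dθ, assuming both descriptor curves are sampled at the same uniformly spaced angles (an assumption of this sketch, not stated in the source):

```python
import numpy as np

def curve_dissimilarity(d1, d2):
    """Approximate the integral of |d1(theta) - d2(theta)| over [0, 2*pi)
    for two descriptor curves sampled at the same uniform angles."""
    dtheta = 2.0 * np.pi / len(d1)          # uniform angular step
    return np.sum(np.abs(d1 - d2)) * dtheta

# Toy descriptor curves: a constant offset of 0.1 integrates to 0.1 * 2*pi.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
d1 = np.sin(theta)
d2 = np.sin(theta) + 0.1
score = curve_dissimilarity(d1, d2)
```

A small score indicates similar local surface shape around the two points; identical curves score zero.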
During training, the approach retains only the point descriptors that match consistently under different facial expressions. For efficient retrieval during matching, the selected point descriptors are stored in a 3D table along with the identities of their training scans (the gallery). The indexing dimensions of the table are the number of local minima of a descriptor, the number of its local maxima, and the sum of the two. To match a probe scan against the gallery identities, point descriptors are computed at every point of the probe. All of the probe's descriptors are then matched against those in the 3D table, and each matching descriptor votes for the identity of the probe.
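A minimal sketch of this indexing-and-voting scheme, assuming descriptor curves are stored as sampled arrays and that a simple mean absolute difference stands in for the curve-matching step (both illustrative assumptions, not the exact procedure of Chua et al.):

```python
import numpy as np
from collections import defaultdict

def count_extrema(d):
    """Count strict local minima and maxima of a sampled descriptor
    curve, treated as circular to match the closed intersection curve."""
    prev, nxt = np.roll(d, 1), np.roll(d, -1)
    n_min = int(np.sum((d < prev) & (d < nxt)))
    n_max = int(np.sum((d > prev) & (d > nxt)))
    return n_min, n_max

def build_table(gallery):
    """gallery: {identity: [descriptor curves]}.
    Index each descriptor by (#minima, #maxima, #minima + #maxima)."""
    table = defaultdict(list)
    for identity, descriptors in gallery.items():
        for d in descriptors:
            n_min, n_max = count_extrema(d)
            table[(n_min, n_max, n_min + n_max)].append((identity, d))
    return table

def vote(probe_descriptors, table, threshold):
    """Each probe descriptor votes for identities whose stored
    descriptors it matches; return the identity with the most votes."""
    votes = defaultdict(int)
    for d in probe_descriptors:
        n_min, n_max = count_extrema(d)
        for identity, g in table.get((n_min, n_max, n_min + n_max), []):
            if np.mean(np.abs(d - g)) < threshold:
                votes[identity] += 1
    return max(votes, key=votes.get) if votes else None

# Toy gallery of two identities with differently shaped descriptor curves.
theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
gallery = {"A": [np.sin(2 * theta)], "B": [np.sin(3 * theta)]}
table = build_table(gallery)
probe = [np.sin(2 * theta) + 0.01]   # slightly perturbed copy of A's curve
best = vote(probe, table, threshold=0.05)
```

The extrema-count key only narrows the candidate set; the actual curve comparison still decides whether a vote is cast.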
Principal directions method: Principal directions are used by Mian et al. (2008) to detect repeatable and descriptive key points on facial surfaces, where descriptiveness is indicated by the non-planarity of the local surface. The detection of a key point starts by cropping the local surface around a candidate key point using a sphere. A plane is then fitted to the local surface in order to sample the surface uniformly according to a regular grid, resulting in a fixed number n of samples. Next, the principal directions of the sampled local surface, E = [e1 e2 e3], are found in a similar fashion as described in Equation 2.59. After that, the 3D points of the sampled surface are projected on the principal directions, p′i = E(pi − p̄), where i = 1, ..., n, and p̄ is the mean of the sampled points.