
accurate version of this model by assuming a different affine transform, and thus a different d, for each facet of the mesh. This approximation neglects depth variation across individual facets, rather than across the whole surface. Under this assumption, the projection of a 3D point q_i lying on facet f of the mesh can be expressed as

$$
d_f \begin{bmatrix} u_i \\ v_i \end{bmatrix}
= A \left( \begin{bmatrix} I_{2\times 2} & 0 \end{bmatrix} q_i + 0 \right) ,
\qquad (3.2)
$$

where d_f accounts for the average depth of facet f. Here, without loss of generality, we expressed the 3D point in the camera reference frame, and therefore replaced R with the first two rows of the 3 × 3 identity matrix and the translation with a zero vector. Note that this does not prevent us from accounting for camera motion; it simply means that camera motion will be interpreted as a rigid motion of the object of interest.
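The per-facet model of Eq. (3.2) can be sketched in a few lines of NumPy. This is an illustration, not the authors' code; the 2 × 2 intrinsic matrix A2 and the facet depth d_f are assumed to be given.

```python
import numpy as np

def project_weak_perspective(q_i, A2, d_f):
    """Project a 3D point q_i (expressed in the camera frame) lying on a
    facet with average depth d_f, following Eq. (3.2).

    [I_{2x2} | 0] q_i simply keeps the first two coordinates of q_i, and
    the translation is the zero vector, so nothing is added."""
    return A2 @ q_i[:2] / d_f

# Assumed intrinsics: focal length 800, no skew (hypothetical values).
A2 = np.array([[800.0,   0.0],
               [  0.0, 800.0]])
q = np.array([0.1, -0.05, 2.0])   # 3D point in the camera frame
print(project_weak_perspective(q, A2, d_f=2.0))
```

Note that all points on the same facet share the single depth factor d_f, which is what makes the per-facet model affine.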

Under the full perspective model, the projection of a 3D point q_i is written as

$$
d_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix}
= A \left( I_{3\times 3} \, q_i + 0 \right) ,
\qquad (3.3)
$$

where the matrix of internal camera parameters A is now a 3 × 3 matrix, and each point i has a different depth factor d_i.
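The full perspective model of Eq. (3.3) differs from Eq. (3.2) only in that each point carries its own depth. A minimal sketch, again with a hypothetical 3 × 3 intrinsic matrix A3:

```python
import numpy as np

def project_perspective(q_i, A3):
    """Project a 3D point q_i (camera frame) following Eq. (3.3):
    d_i [u_i, v_i, 1]^T = A3 q_i (I_{3x3} q_i + 0 reduces to q_i)."""
    p = A3 @ q_i
    d_i = p[2]             # per-point depth factor
    return p[:2] / d_i, d_i

# Assumed intrinsics: focal length 800, principal point (320, 240).
A3 = np.array([[800.0,   0.0, 320.0],
               [  0.0, 800.0, 240.0],
               [  0.0,   0.0,   1.0]])
uv, depth = project_perspective(np.array([0.1, -0.05, 2.0]), A3)
print(uv, depth)
```

Dividing by the third homogeneous coordinate recovers d_i automatically, which is why no per-facet depth needs to be assumed here.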

3.2 3D-TO-2D CORRESPONDENCES

Detecting feature points in images has received enormous attention in the Computer Vision community. For most template-based approaches, feature points are typically detected with either the SIFT keypoint detector Lowe [2004] or the Harris corner detector Harris and Stephens [1988]. Once feature points have been detected in two images, they need to be matched to produce correspondences. When using SIFT, this can be done by a simple dot product between the descriptor vectors of the feature points. For Harris corners, methods based on Randomized Trees have proved efficient Lepetit and Fua [2006]. From a large set of views obtained by applying random affine transformations to a reference image, a tree that models the relationships between neighboring keypoints is built. Each leaf node of the tree then corresponds to a specific keypoint, and matching can be done by dropping the feature points of a new image down the tree.
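The dot-product matching mentioned above can be sketched as follows. This is an illustrative implementation, not the one used by any particular system: descriptors are assumed L2-normalized, so the dot product is a cosine similarity, and a simplified variant of Lowe's ratio test (on 1 − cosine, which is proportional to squared Euclidean distance for unit vectors) rejects ambiguous matches.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) matching rows of desc_a to rows of desc_b.
    desc_a: (n, d) and desc_b: (m, d) arrays of unit-norm descriptors."""
    scores = desc_a @ desc_b.T                # cosine similarities
    matches = []
    for i, row in enumerate(scores):
        order = np.argsort(row)[::-1]         # best candidate first
        best, second = row[order[0]], row[order[1]]
        # keep the match only if clearly better than the runner-up
        if (1.0 - best) < ratio * (1.0 - second):
            matches.append((i, int(order[0])))
    return matches

# Toy check: descriptors matched against a permuted copy of themselves.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)
b = a[[2, 0, 1, 3]]                           # same descriptors, permuted
print(match_descriptors(a, b))
```

In practice the ratio threshold trades off the number of correspondences against the fraction of outliers passed on to the reconstruction stage.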

Another way to establish correspondences is to first tackle the 2D non-rigid image registration problem. Non-rigid image registration aims at finding a transformation between two images of the same surface undergoing different deformations. Different parameterizations have been proposed to represent the transformation, such as RBFs Bartoli and Zisserman [2004], thin-plate splines Bookstein [1989], or 2D meshes Pilet et al. [2008]. Since the resulting warp is defined over the entire image, discrete correspondences can then be obtained by sampling it. Note that, while 2D non-rigid registration can be thought of as related to 3D non-rigid reconstruction, we believe
