exploiting the fact that straight lines in the real world remain straight in the image. Radial and tangential distortions can be directly inferred from deviations from straightness in the image. These early calibration methods based on bundle adjustment, which may additionally determine deviations of the photographic plate from flatness or distortions caused by expansion or shrinkage of the film material, are usually termed 'on-the-job calibration' (Clarke and Fryer, 1998).
1.4.2 The Direct Linear Transform (DLT) Method
In its simplest form, the direct linear transform (DLT) calibration method introduced by Abdel-Aziz and Karara (1971) aims to determine the intrinsic and extrinsic camera parameters according to (1.1). This goal is achieved by establishing an appropriate transformation which translates the world coordinates of known control points in the scene into image coordinates. This section follows the illustrative presentation of the DLT method by Kwon (1998). Accordingly, the DLT method assumes a camera described by the pinhole model, for which, as outlined in the introduction given in Sect. 1.1, it is straightforward to derive the relation
\begin{pmatrix} \hat{u} \\ \hat{v} \\ b \end{pmatrix} = c R \begin{pmatrix} x - x_0 \\ y - y_0 \\ z - z_0 \end{pmatrix}    (1.26)
In (1.26), R denotes the rotation matrix as described in Sect. 1.1, û and v̂ the metric pixel coordinates in the image plane relative to the principal point, and x, y, z are the components of a scene point ^W x in the world coordinate system. The values x_0, y_0, and z_0 can be inferred from the translation vector t introduced in Sect. 1.1, while c is a scalar scale factor. This scale factor amounts to

c = -\frac{b}{r_{31}(x - x_0) + r_{32}(y - y_0) + r_{33}(z - z_0)},    (1.27)
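As a minimal numerical sketch of (1.26) and (1.27), the following code computes the metric image coordinates û and v̂ of a scene point; the function name, the variable names, and the use of NumPy are illustrative assumptions made here, not part of the original text.

```python
import numpy as np

def project_metric(R, x0, b, Xw):
    """Sketch of (1.26)/(1.27): metric image coordinates (u_hat, v_hat)
    of a scene point, relative to the principal point.

    R  : 3x3 rotation matrix (Sect. 1.1)
    x0 : offset (x_0, y_0, z_0) inferred from the translation vector t
    b  : image distance of the pinhole camera
    Xw : scene point (x, y, z) in world coordinates
    """
    R = np.asarray(R, dtype=float)
    d = np.asarray(Xw, dtype=float) - np.asarray(x0, dtype=float)
    # Scale factor c according to (1.27)
    c = -b / (R[2] @ d)
    # First two rows of (1.26)
    u_hat = c * (R[0] @ d)
    v_hat = c * (R[1] @ d)
    return u_hat, v_hat
```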
where the coefficients r_ij denote the elements of the rotation matrix R. Assuming rectangular sensor pixels without skew, the coordinates of the image point in the sensor coordinate system, i.e. the pixel coordinates, are given by u - u_0 = k_u û and v - v_0 = k_v v̂, where u_0 and v_0 denote the position of the principal point in the sensor coordinate system. Inserting (1.27) into (1.26) then yields the relations
u - u_0 = -b k_u \frac{r_{11}(x - x_0) + r_{12}(y - y_0) + r_{13}(z - z_0)}{r_{31}(x - x_0) + r_{32}(y - y_0) + r_{33}(z - z_0)}

v - v_0 = -b k_v \frac{r_{21}(x - x_0) + r_{22}(y - y_0) + r_{23}(z - z_0)}{r_{31}(x - x_0) + r_{32}(y - y_0) + r_{33}(z - z_0)}    (1.28)
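Building on the previous sketch, the relations (1.28) can be evaluated directly to obtain the pixel coordinates u and v; again, the names used below are illustrative assumptions.

```python
import numpy as np

def project_to_pixels(R, x0, b, k_u, k_v, u0, v0, Xw):
    """Sketch of (1.28): pixel coordinates (u, v) of a scene point,
    assuming rectangular sensor pixels without skew."""
    R = np.asarray(R, dtype=float)
    d = np.asarray(Xw, dtype=float) - np.asarray(x0, dtype=float)
    denom = R[2] @ d                       # r31*(x-x0) + r32*(y-y0) + r33*(z-z0)
    u = u0 - b * k_u * (R[0] @ d) / denom  # first relation of (1.28)
    v = v0 - b * k_v * (R[1] @ d) / denom  # second relation of (1.28)
    return u, v
```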
Rearranging (1.28) results in expressions for the pixel coordinates u and v which only depend on the coordinates x, y, and z of the scene point and 11 constant parameters that comprise intrinsic and extrinsic camera parameters:
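These expressions correspond to the classical 11-parameter DLT form in which u and v are each written as a ratio of affine functions of x, y, and z with coefficients L_1, ..., L_11 (cf. Kwon, 1998). As an illustration of how such parameters could be estimated from a set of known control points by linear least squares, consider the following sketch; the function name, variable names, and the use of NumPy are assumptions made here, not part of the original text.

```python
import numpy as np

def estimate_dlt_parameters(world_pts, image_pts):
    """Estimate the 11 DLT parameters L1..L11 from known control points.

    Uses the standard form (cf. Kwon, 1998):
        u = (L1*x + L2*y + L3*z + L4) / (L9*x + L10*y + L11*z + 1)
        v = (L5*x + L6*y + L7*z + L8) / (L9*x + L10*y + L11*z + 1)
    Each control point with known (x, y, z) and measured (u, v)
    contributes two linear equations in L1..L11.
    """
    rows, rhs = [], []
    for (x, y, z), (u, v) in zip(world_pts, image_pts):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs.append(v)
    A = np.asarray(rows, dtype=float)
    y = np.asarray(rhs, dtype=float)
    # Linear least-squares solution for the 11 parameters
    L, *_ = np.linalg.lstsq(A, y, rcond=None)
    return L
```

With at least six non-coplanar control points the system is overdetermined, and the least-squares solution yields the 11 DLT parameters from which the intrinsic and extrinsic camera parameters can subsequently be recovered.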