Fig. 20.2 Principle of image prediction ($N_p = 3$, $N_c = 2$). (Figure labels: desired image; current image; model input: camera velocity; model output: predicted features over $N_p$; time instants $k$ to $k+5$.)
20.3.1 Nonlinear Global Model
The control input of the free-flying process is the camera velocity $\tau$ applied to the camera. Here, the state of the system can be the camera pose in the target frame: $x = (P_x, P_y, P_z, \Theta_x, \Theta_y, \Theta_z)$. The dynamic equation can be approximated by¹

$$ x(k+1) = x(k) + T_e\,\tau(k) = f\big(x(k), \tau(k)\big). \qquad (20.14) $$
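As a rough illustration of how (20.14) can be propagated over the prediction horizon, the following Python sketch applies the state update $N_p$ times to a candidate velocity sequence; the sampling period $T_e$, the horizon length, and the velocity values are hypothetical choices made only for the example, not values from the text.

```python
import numpy as np

# Sketch of the state update (20.14) propagated over the prediction horizon.
# Te, Np and the candidate velocity sequence are hypothetical example values.
Te = 0.04                      # sampling period T_e [s] (assumed)
Np = 3                         # prediction horizon N_p

def f(x, tau):
    """One-step pose prediction: x(k+1) = x(k) + Te * tau(k)."""
    return x + Te * tau

x = np.zeros(6)                # current pose (Px, Py, Pz, Theta_x, Theta_y, Theta_z)
tau_seq = [np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05])] * Np   # candidate camera velocities

predicted_poses = []
for tau in tau_seq:
    x = f(x, tau)
    predicted_poses.append(x.copy())
```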
The output is the visual features expressed in the image plane, denoted $s_m$. In the case of a perspective camera, the output equation for one point-like feature in normalized coordinates can be written as

$$ s_m(k) = \begin{bmatrix} u(k) \\ v(k) \end{bmatrix} = \begin{bmatrix} X(k)/Z(k) \\ Y(k)/Z(k) \end{bmatrix} = g\big(X(k), Y(k), Z(k)\big), \qquad (20.15) $$
where $(X, Y, Z, 1)_{R_c}$ are the point coordinates in the camera frame. The rigid transformation between the camera frame and the target frame, denoted $l(x)$, can easily be deduced from the knowledge of the camera pose $x(k)$. If the point coordinates are known in the target frame, $(X, Y, Z, 1)_{R_t}$, then the point coordinates in the camera frame, $(X, Y, Z, 1)_{R_c}$, are given by
$$ \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}_{R_c} = \begin{bmatrix} R(x) & T(x) \\ 0_{1\times 3} & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}_{R_t} = l\big(x(k)\big). \qquad (20.16) $$
Finally, we obtain
$$ s_m(k) = g \circ l\big(x(k)\big) = h\big(x(k)\big). \qquad (20.17) $$
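A minimal sketch of this output map, covering (20.15) through (20.17), is given below. The excerpt does not fix how $R(x)$ and $T(x)$ follow from the pose components, so a roll-pitch-yaw composition and direct use of the pose vector are assumed here purely for illustration, and the point is assumed to lie in front of the camera ($Z > 0$).

```python
import numpy as np

def l(x, P_t):
    """Rigid transformation (20.16): homogeneous target-frame coordinates -> camera frame."""
    Px, Py, Pz, tx, ty, tz = x
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    H = np.eye(4)
    H[:3, :3] = Rz @ Ry @ Rx            # R(x): assumed roll-pitch-yaw composition
    H[:3, 3] = [Px, Py, Pz]             # T(x)
    return H @ P_t

def g(P_c):
    """Perspective projection (20.15): normalized coordinates (u, v) = (X/Z, Y/Z)."""
    X, Y, Z = P_c[:3]
    return np.array([X / Z, Y / Z])

def h(x, P_t):
    """Output map (20.17): s_m = g(l(x)) for a point known in the target frame."""
    return g(l(x, P_t))

# Example: predicted feature for a hypothetical pose and a point at the target origin
s_m = h(np.array([0.1, 0.0, 0.5, 0.0, 0.0, np.pi / 6]),
        np.array([0.0, 0.0, 0.0, 1.0]))
```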
Equations (20.7) are now completely identified with (20.14) and (20.17). This dynamic model combines 2D and 3D data, and so it is appropriate for dealing with 2D and/or 3D constraints. The constraints are expressed on the states and/or the outputs of the prediction model, respectively, and are easily added to the optimization problem.
¹ The exponential map could also be used to better describe the camera motion.
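As footnote 1 suggests, the camera motion can also be integrated with the SE(3) exponential map. A possible sketch of this alternative, assuming a constant velocity screw over the sampling period and expressed in the current camera frame; the pose, velocity, and sampling period values are hypothetical:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def step_exponential(H, tau, Te):
    """Advance the homogeneous camera pose H over one sampling period Te by
    the SE(3) exponential of the constant twist tau = (v, omega)."""
    v, w = tau[:3], tau[3:]
    xi = np.zeros((4, 4))
    xi[:3, :3] = skew(w)
    xi[:3, 3] = v
    # Right multiplication assumes the velocity is expressed in the current camera frame.
    return H @ expm(Te * xi)

H = np.eye(4)                                    # initial pose (hypothetical)
tau = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05])  # camera velocity screw (assumed)
H_next = step_exponential(H, tau, 0.04)
```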