$$
A = \begin{pmatrix} f_x & s & u_x \\ 0 & f_y & u_y \\ 0 & 0 & 1 \end{pmatrix}, \qquad (9.2)
$$
$f_x, f_y \in \mathbb{R}$ being the focal lengths, $u_x, u_y \in \mathbb{R}$ the coordinates of the principal point, and $s \in \mathbb{R}$ the aspect ratio. Similarly, $q_i$ projects onto $F'$ at the point $p'_i = (p'_{i,1}, p'_{i,2}, 1)^T \in \mathbb{R}^3$ given by
$$
d'_i p'_i = A O'^T (q_i - c') \qquad (9.3)
$$
where $d'_i$ is the depth of the point with respect to $F'$.
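As a concrete instance of (9.2) and (9.3), the following sketch builds $A$ and projects a 3-D point onto the image plane of $F'$; all numerical values (intrinsics, pose, point) are hypothetical, chosen only for illustration.

```python
import numpy as np

# Intrinsic matrix A as in (9.2); parameter values are assumptions.
f_x, f_y = 800.0, 780.0   # focal lengths
u_x, u_y = 320.0, 240.0   # principal point
s = 0.0                   # aspect-ratio term
A = np.array([[f_x, s,   u_x],
              [0.0, f_y, u_y],
              [0.0, 0.0, 1.0]])

# Pose of F': orientation O' and position c' (hypothetical values).
O_p = np.eye(3)                  # O'
c_p = np.array([0.1, 0.0, 0.0])  # c'

# A 3-D point q_i and its projection per (9.3): d'_i p'_i = A O'^T (q_i - c').
q_i = np.array([0.5, -0.2, 3.0])
v = A @ O_p.T @ (q_i - c_p)
d_i = v[2]          # depth d'_i of q_i with respect to F'
p_i = v / d_i       # homogeneous image point (p'_{i,1}, p'_{i,2}, 1)^T
print(p_i, d_i)
```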
The camera pose between $F$ and $F'$ is described by the pair
$$
(R, t) \in SO(3) \times \mathbb{R}^3 \qquad (9.4)
$$
where R and t are the rotational and translational components expressed with respect
to F and given by
$$
R = O^T O', \qquad t = \frac{O^T (c' - c)}{\left\| O^T (c' - c) \right\|} \qquad (9.5)
$$
($t$ is normalized because, by exploiting only the image projections of the points $q_1, \ldots, q_N$ and the matrix $A$, the translation can be recovered only up to a scale factor).
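A minimal numeric check of (9.5), assuming the absolute orientations $O, O'$ and positions $c, c'$ of the two frames are available; the poses below are made up for the example.

```python
import numpy as np

def relative_pose(O, c, O_p, c_p):
    """Rotation R and normalized translation t of F' w.r.t. F, as in (9.5)."""
    R = O.T @ O_p
    v = O.T @ (c_p - c)
    t = v / np.linalg.norm(v)  # unit norm: scale is unobservable from images
    return R, t

# Hypothetical absolute poses of F and F'.
O = np.eye(3)
c = np.zeros(3)
ang = np.deg2rad(30.0)
O_p = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                [np.sin(ang),  np.cos(ang), 0.0],
                [0.0,          0.0,         1.0]])
c_p = np.array([0.2, -0.1, 0.05])

R, t = relative_pose(O, c, O_p, c_p)
print(R)
print(t, np.linalg.norm(t))  # t has unit norm
```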
Let $p, p' \in \mathbb{R}^{2N}$ be the vectors defined as
$$
p = (p_{1,1},\, p_{1,2},\, \ldots,\, p_{N,1},\, p_{N,2})^T, \qquad
p' = (p'_{1,1},\, p'_{1,2},\, \ldots,\, p'_{N,1},\, p'_{N,2})^T.
$$
The goal condition of an eye-in-hand visual servo system can be expressed as

$$
\| p' - p \| \le \varepsilon \qquad (9.6)
$$

where $\varepsilon \in \mathbb{R}$ is a threshold chosen to limit the distance between $p$ and $p'$ (for example, via the infinity norm).
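In code, the goal test (9.6) amounts to stacking the $N$ image points into $p$ and $p'$ and comparing their difference against $\varepsilon$; a sketch using the infinity norm mentioned above, with hypothetical measurements.

```python
import numpy as np

def goal_reached(P, P_p, eps):
    """Check (9.6): ||p' - p|| <= eps, using the infinity norm.

    P, P_p: N x 2 arrays of image coordinates (p_{i,1}, p_{i,2}),
    stacked row-wise into the vectors p, p' in R^{2N}.
    """
    p = P.reshape(-1)        # (p_{1,1}, p_{1,2}, ..., p_{N,1}, p_{N,2})
    p_prime = P_p.reshape(-1)
    return np.linalg.norm(p_prime - p, ord=np.inf) <= eps

# Hypothetical measurements for N = 3 points.
P   = np.array([[100.0, 120.0], [310.0, 205.0], [50.0, 400.0]])
P_p = np.array([[100.4, 119.8], [309.7, 205.2], [50.1, 399.9]])
print(goal_reached(P, P_p, eps=0.5))  # True: max deviation is 0.4 pixels
```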
This chapter addresses the computation of upper and lower bounds of the worst-
case robot positioning error introduced by image measurement errors through the
goal condition (9.6). In particular, we consider the worst-case rotational error
$$
s_r(\varepsilon) = \sup_{R,\, t} \theta \quad \text{s.t.} \quad \| p' - p \| \le \varepsilon, \qquad (9.7)
$$
where $\theta \in [0, \pi]$ is the angle in the representation of $R$ via exponential coordinates, i.e.

$$
R = e^{[\theta u]_{\times}}. \qquad (9.8)
$$
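For a given $R$, the angle $\theta$ of (9.8) can be recovered from the trace identity $\mathrm{tr}(R) = 1 + 2\cos\theta$; a sketch, with an assumed example rotation. Maximizing $\theta$ over any set of poses satisfying the constraint in (9.7) yields a lower bound on $s_r(\varepsilon)$.

```python
import numpy as np

def rotation_angle(R):
    """Angle theta in [0, pi] of the exponential coordinates of R, eq. (9.8).

    Uses tr(R) = 1 + 2 cos(theta); clipping guards against round-off.
    """
    return float(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))

# Example: rotation of 30 degrees about the z-axis (assumed values).
ang = np.deg2rad(30.0)
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0,          0.0,         1.0]])
print(np.rad2deg(rotation_angle(R)))  # ~30.0

# Taking the maximum of rotation_angle(R) over poses (R, t) that
# satisfy (9.6) gives a lower bound on s_r(eps) in (9.7).
```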