To keep things simple, first consider the case of pure translation (v ∈ ℜ³) where the camera is aligned with the robot base frame. In this case, the relationship between the motion of the camera and the image motion of a fixed point in space is given by the well-known image Jacobian relationship [14]:

$$
\dot{h} = J v, \qquad (4.15)
$$

where h = (u, v) ∈ ℜ² is the image location of a feature point, and J is 2 × 3. It is again well-known [3, 14] that the rows of J span the (two-dimensional) space of motions that create feature motion in the image, and therefore the null space of J is the (one-dimensional) space of motions that leave the point fixed in the image. Consider, thus, creating a virtual fixture by defining

$$
D = J \qquad (4.16)
$$
in (4.9). From the discussion above, it should be clear that this will create a virtual
fixture that prefers motion in any direction except along the viewing direction. While
it would seem we are done at this point, there is one minor issue: the image Jacobian
depends on the depth of the estimated point. However, if we consider the form of
the Jacobian in this case, we see it can be written thus:
$$
J \;=\;
\begin{bmatrix}
\dfrac{1}{z} & 0 & -\dfrac{u}{z} \\[4pt]
0 & \dfrac{1}{z} & -\dfrac{v}{z}
\end{bmatrix}
\;=\;
\frac{1}{z}
\begin{bmatrix}
1 & 0 & -u \\
0 & 1 & -v
\end{bmatrix}.
\qquad (4.17)
$$
As such, we see that the term involving z is a scale factor and, as noted earlier, our projection operators are invariant over scaling of their argument. Thus, we have our first result as follows.
Image plane translation. If we restrict v to be pure translation and choose an image location h = (u, v), then implementing (4.9) using

1. D = J creates a virtual fixture that prefers motion in the plane normal to the viewing direction defined by h and the camera optical center;
2. D spanning the null space of J creates a virtual fixture that prefers motion along the viewing direction defined by h and the camera optical center.
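Both parts of this result can be checked numerically against the Jacobian in (4.17). The following sketch (the values of u, v, and z are illustrative, not from the text) verifies that the rows of J are normal to the viewing ray and that the null space of J is one-dimensional:

```python
import numpy as np

# Illustrative values: an image location (u, v) and a depth z.
u, v, z = 0.4, -0.2, 3.0
J = (1.0 / z) * np.array([[1.0, 0.0, -u],
                          [0.0, 1.0, -v]])   # image Jacobian of (4.17)

# The viewing direction through h = (u, v) and the optical center is the
# ray along (u, v, 1) in camera coordinates.
ray = np.array([u, v, 1.0])

# 1. Each row of J is orthogonal to the ray, so the row space of J is
#    the plane normal to the viewing direction.
assert np.allclose(J @ ray, 0.0)

# 2. J has rank 2, so its null space is one-dimensional; the ray spans
#    it, and motion along the viewing direction leaves the point fixed.
assert np.linalg.matrix_rank(J) == 2
```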
As a special case, choosing u = v = 0 yields a virtual fixture parallel to the image plane, which, as is obvious from (4.17), is the camera x-y plane.
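The depth invariance noted after (4.17), and this special case, can both be verified numerically. The sketch below assumes the preferred-direction operator in (4.9) is the orthogonal projector onto the row space of D; since (4.9) itself is not shown in this excerpt, that form is an assumption:

```python
import numpy as np

# Assumed form of the Span operator in (4.9): the orthogonal projector
# onto the row space of D. For full-rank D this equals D^T (D D^T)^-1 D;
# the pseudoinverse covers any rank.
def span_projector(D):
    return np.linalg.pinv(D) @ D

u, v = 0.0, 0.0                      # special case: the image origin
for z in (0.5, 2.0, 10.0):
    J = (1.0 / z) * np.array([[1.0, 0.0, -u],
                              [0.0, 1.0, -v]])   # Jacobian of (4.17)
    # The depth z cancels in the projector, and for u = v = 0 the
    # preferred directions are exactly the camera x-y plane.
    assert np.allclose(span_projector(J), np.diag([1.0, 1.0, 0.0]))
```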
It is important to note that the image plane virtual fixtures defined above can
be implemented both with and without feedback. That is, if we simply choose a
fixed image location (
e.g.
the origin), then the camera will always move in a plane
orthogonal to the line of sight through the chosen location. On the other hand, if we
choose to track a feature over time, then the motion will always be orthogonal to the
line of sight to that feature.
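The feedback variant described here can be sketched as a per-step projection of the commanded camera translation; `fixture_step` and its arguments are hypothetical names, and the tracked image location (u, v) would come from any feature tracker:

```python
import numpy as np

# Hypothetical sketch (names not from the text): at each servo step,
# project the commanded camera translation onto the plane normal to the
# line of sight through the current image location (u, v) of the feature.
def fixture_step(v_commanded, u, v):
    J = np.array([[1.0, 0.0, -u],    # depth factor dropped: the
                  [0.0, 1.0, -v]])   # projector is scale-invariant
    P = np.linalg.pinv(J) @ J        # projector onto the row space of J
    return P @ v_commanded

# The applied motion has no component along the sight line (u, v, 1).
v_out = fixture_step(np.array([0.1, 0.0, 0.2]), u=0.3, v=-0.1)
assert abs(v_out @ np.array([0.3, -0.1, 1.0])) < 1e-9
```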
The other possibility, with a single feature, is to maintain the visual cue at a
specific image location. For example, suppose the goal is to center an observed