between the current and desired images (that is, e = I - I*, where I is a vector that contains the image intensity of all pixels).
However, the idea of considering the whole image as a feature has been explored before [18, 9]. As in our case, the methods presented in [9, 18] did not require a matching process. Nevertheless, they differ from our approach in two important points. First, they do not use the image intensity directly: an eigenspace decomposition is performed to reduce the dimensionality of the image data, and the control is then carried out in the eigenspace rather than directly with the image intensity. Moreover, proceeding this way requires the off-line computation of this eigenspace and then, for each new frame, the projection of the image onto this subspace. Second, the interaction matrix related to the eigenspace is not computed analytically but learned during an off-line step. This learning process has two drawbacks: it has to be redone for each new object, and it requires the acquisition of many images of the scene at various camera positions. Using an analytical interaction matrix avoids these issues.
An interesting approach, which also considers the pixel intensities, has recently been proposed in [15]. It is based on kernel methods and leads to a highly decoupled control law. However, only the translations and the rotation around the optical axis are considered, whereas in our work all 6 DOF are
controlled. Another approach that requires neither tracking nor matching has been proposed in [1]. It collectively models the feature points extracted from the image as a mixture of Gaussians and tries to minimize a distance function between the Gaussian mixtures at the current and desired positions. Simulation results show that this approach can control 3 DOF of a robot (and all 6 DOF under some assumptions). However, an image processing step is still required to extract the current feature points; our approach does not need this step. Finally, in [2], the authors present a homography-based approach to visual servoing. In this method, the image intensity of a planar patch is first used to estimate the homography between the current and desired images, which is then used to build the control law. Although, as in our case, the image intensity is the basis of the approach, a significant image processing step is necessary to estimate the homography. Furthermore, the visual features used in the control law rely on the homography matrix and not directly on the luminance.
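For illustration, the Gaussian-mixture idea discussed above for [1] can be sketched as follows. This is our own minimal sketch, not the implementation of [1]: each set of extracted feature points is modeled as an equal-weight isotropic Gaussian mixture, and the squared L2 distance between the current and desired mixtures is evaluated in closed form (the function names and the fixed kernel width sigma are our assumptions).

```python
import numpy as np

def gauss_integral(m1, m2, var):
    """Closed-form integral of the product of two Gaussians:
    int N(x; m1, (var/2) I) N(x; m2, (var/2) I) dx = N(m1; m2, var I).
    m1 is a single point (d,), m2 an array of points (N, d)."""
    d = m1.shape[-1]
    diff = m1 - m2
    return np.exp(-np.sum(diff**2, axis=-1) / (2.0 * var)) / (2.0 * np.pi * var) ** (d / 2.0)

def gmm_l2_dist2(P, Q, sigma=1.0):
    """Squared L2 distance between two equal-weight isotropic Gaussian
    mixtures centred on the point sets P and Q (arrays of shape (N, d)).
    dist^2 = int (f - g)^2 = int f^2 - 2 int f g + int g^2."""
    var = 2.0 * sigma**2  # covariance sum of two components with sigma^2 each

    def cross(A, B):
        # average of the pairwise Gaussian product integrals
        s = sum(gauss_integral(a, B, var).sum() for a in A)
        return s / (len(A) * len(B))

    return cross(P, P) - 2.0 * cross(P, Q) + cross(Q, Q)
```

Minimizing this distance drives the current point set toward the desired one without requiring any explicit point-to-point matching, which is the appeal of the method; the cost is that the points themselves must still be extracted from each image.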
In the remainder of this chapter we first compute the interaction matrix related to
the luminance in Section 5.2. Then, we reformulate the visual servoing problem into
an optimization problem in Section 5.3 and propose a new control law dedicated to
the specific case of the luminance. Section 5.4 shows experimental results on various
scenes for several positioning tasks.
5.2 Luminance as a Visual Feature
The visual features that we consider here are the luminance I of each point of the image, that is

s(r) = I(r) = (I_1, I_2, ..., I_N)    (5.3)
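As a minimal sketch of Eq. (5.3), assuming grayscale images stored as NumPy arrays (the function names are ours, for illustration), the feature vector is obtained by simply stacking all pixel intensities, and the error e = I - I* is their difference:

```python
import numpy as np

def luminance_feature(image):
    """s(r) = I(r) = (I_1, ..., I_N): stack the intensity of every
    pixel of the image into a single feature vector (Eq. 5.3)."""
    return np.asarray(image, dtype=float).ravel()

def photometric_error(current, desired):
    """Error e = I - I* between the current and desired images."""
    return luminance_feature(current) - luminance_feature(desired)
```

With this representation the whole image is the feature, so no extraction, tracking, or matching step is needed.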