information other than the image intensity (that is, the pure luminance signal) can be considered to control the robot motion.
Classically, to achieve a visual servoing task, a set of visual features has to be
selected from the image in order to control the desired degrees of freedom (DOF).
A control law also has to be designed so that these visual features $\mathbf{s}$ reach a desired value $\mathbf{s}^*$, leading to a correct realization of the task. The control principle is thus to regulate to zero the error vector $\mathbf{e} = \mathbf{s} - \mathbf{s}^*$. To build this control law, the interaction matrix $\mathbf{L}_\mathbf{s}$ is required. For eye-in-hand systems, this matrix links the time variation of $\mathbf{s}$ to the camera instantaneous velocity $\mathbf{v}$:

$$\dot{\mathbf{s}} = \mathbf{L}_\mathbf{s}\,\mathbf{v} \qquad (5.1)$$

with $\mathbf{v} = (\boldsymbol{v}, \boldsymbol{\omega})$, where $\boldsymbol{v}$ is the linear camera velocity and $\boldsymbol{\omega}$ its angular velocity.

Thereafter, if we consider the camera velocity as the input of the robot controller, the following control law is designed to try to obtain an exponential decoupled decrease of the error $\mathbf{e}$:

$$\mathbf{v} = -\lambda\,\widehat{\mathbf{L}_\mathbf{s}}^{+}\,\mathbf{e} \qquad (5.2)$$

where $\lambda$ is a proportional gain that has to be tuned to minimize the time-to-convergence, and $\widehat{\mathbf{L}_\mathbf{s}}^{+}$ is the pseudo-inverse of a model or an approximation of $\mathbf{L}_\mathbf{s}$ [4].
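As a concrete illustration (not taken from the chapter), the following minimal NumPy sketch implements the control law (5.2) for point features, using the classical interaction matrix of a normalized image point; the point coordinates, the common depth Z, and the gain value are illustrative assumptions.

import numpy as np

def interaction_matrix_point(x, y, Z):
    # Interaction matrix of a normalized image point (x, y) at depth Z:
    # it links (xdot, ydot) to the 6-DOF camera velocity
    # v = (vx, vy, vz, wx, wy, wz), as in Eq. (5.1).
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def control_law(s, s_star, L_hat, lam):
    # Eq. (5.2): v = -lambda * pinv(L_hat) @ (s - s*)
    e = s - s_star
    return -lam * np.linalg.pinv(L_hat) @ e

# Hypothetical example: four point features and their desired positions.
pts = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
pts_star = np.array([[0.2, 0.2], [-0.2, 0.2], [-0.2, -0.2], [0.2, -0.2]])
Z = 1.0    # assumed (approximate) depth of every point
lam = 0.5  # proportional gain, to be tuned

L_hat = np.vstack([interaction_matrix_point(x, y, Z) for (x, y) in pts])
v = control_law(pts.ravel(), pts_star.ravel(), L_hat, lam)
print(v)   # camera velocity screw (vx, vy, vz, wx, wy, wz)

With four points in a non-degenerate configuration, the stacked 8 x 6 matrix generically has full column rank, so the pseudo-inverse in (5.2) controls all six DOF.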
As can be seen, visual servoing explicitly relies on the choice of the visual features $\mathbf{s}$ (and thus on the related interaction matrix); that is the key point of this approach. However, with a vision sensor providing 2D measurements $\mathbf{x}(\mathbf{r}_k)$ (where $\mathbf{r}_k$ is the camera pose at time $k$), potential visual features $\mathbf{s}$ are numerous, since 2D data (coordinates of feature points in the image, contours, moments, ...) as well as 3D data provided by a localization algorithm exploiting $\mathbf{x}(\mathbf{r}_k)$ can be considered. In all cases, however important the choice of $\mathbf{s}$ may be, it is always designed from the visual measurements $\mathbf{x}(\mathbf{r}_k)$. However, robust extraction, matching (between $\mathbf{x}(\mathbf{r}_0)$ and $\mathbf{x}^* = \mathbf{x}(\mathbf{r}^*)$, where $\mathbf{r}^*$ is the desired camera pose) and real-time spatio-temporal tracking (between $\mathbf{x}(\mathbf{r}_{k-1})$ and $\mathbf{x}(\mathbf{r}_k)$) have proved to be complex tasks, as testified by the abundant literature on the subject (see [17] for a recent survey). This image processing is, to date, a necessary step, and it is also considered one of the bottlenecks of the expansion of visual servoing. That is why some works have tried to alleviate this problem. A first idea is to select visual features as proposed in [11, 14], or, as in [19], to keep only the visual features that are tracked with a high confidence level (see also [7], where a more general approach is proposed). However, the goal of such approaches is not to simplify the image processing step but to take into account that it can fail. A more interesting way to avoid any tracking process is to use non-geometric visual features; in that case, the parameters of a 2D motion model are used, as in [21, 24, 23, 8]. Nevertheless, such approaches still require a substantial and complex image processing step. Removing the entire matching process is only possible when the luminance is used directly, as we propose.
Indeed, to achieve this goal we use the simplest visual feature that can be considered: the image intensity itself. We therefore call this new approach photometric visual servoing. In that case, the visual feature vector $\mathbf{s}$ is nothing but the image itself, while $\mathbf{s}^*$ is the desired image. The error $\mathbf{e}$ is then simply the difference between the current image and the desired one.
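Since $\mathbf{s}$ is simply the stacked pixel intensities, computing this error requires no extraction, matching, or tracking whatsoever. A minimal sketch, where the grayscale images and the simulated camera motion are placeholder assumptions:

import numpy as np

def photometric_error(I, I_star):
    # e = s - s*: the feature vector s is the image itself, stacked pixel-wise.
    return (I.astype(np.float64) - I_star.astype(np.float64)).ravel()

# Placeholder images standing in for I(r_k) and the desired image I(r*).
rng = np.random.default_rng(0)
I_star = rng.integers(0, 256, size=(240, 320))
I = np.roll(I_star, shift=2, axis=1)  # crude stand-in for a small camera motion

e = photometric_error(I, I_star)
print(e.size, np.linalg.norm(e))  # one error component per pixel

Note that $\mathbf{e}$ has one component per pixel, so $\mathbf{s}$ is of far higher dimension than typical geometric feature vectors.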