(see Figure 5.8). The light direction is therefore aligned with the camera optical
axis, as depicted in Figure 5.2, and it is the only light source in the scene. Note that,
consequently, its direction is no longer constant with respect to the scene, unlike in Section
5.4.1. The initial positioning error and the desired pose are unchanged (but with
Z = 70 cm). The interaction matrix has been estimated at the desired position using
(5.24) to compute L_1 while L_2 = 0 (see the very end of Section 5.2). For all the
experiments using the complete interaction matrix we used k = 100 and K_s = 200
(see (5.19)).
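To make the control scheme concrete, the following is a minimal numerical sketch of a photometric control loop of the form v = -λ L_I⁺ (I - I*), written under the classical temporal luminance constancy assumption and a constant-depth approximation. It does not reproduce the chapter's complete illumination model (the L_1 + L_2 decomposition of (5.24) or the k and K_s parameters of (5.19)); the function names, focal length f, and gain λ are illustrative placeholders.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def luminance_interaction_matrix(I, Z, f=800.0):
    """Stack one row per pixel: L_I = -(dI/dx * L_x + dI/dy * L_y).

    Classical luminance-constancy form only; the chapter's complete model
    adds a specular term (L = L_1 + L_2) that is not reproduced here.
    """
    gy, gx = np.gradient(I.astype(float))   # pixel-wise spatial gradients
    h, w = I.shape
    rows = []
    for v in range(h):
        for u in range(w):
            # hypothetical pinhole normalization (focal length f, centered)
            x, y = (u - w / 2) / f, (v - h / 2) / f
            Lp = point_interaction_matrix(x, y, Z)
            # factor f converts pixel gradients to normalized coordinates
            rows.append(-(gx[v, u] * f * Lp[0] + gy[v, u] * f * Lp[1]))
    return np.vstack(rows)

def control_law(I, I_star, Z, lam=1.0):
    """Camera velocity screw v = -lambda * L_I^+ (I - I*)."""
    L = luminance_interaction_matrix(I, Z)
    e = (I - I_star).astype(float).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

In practice the rows would be built only for a subsampled set of pixels (or in vectorized form), and the velocity command would be iterated until the intensity error norm falls below a threshold.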
As can be seen in Figure 5.9(f), the specularities are very pronounced and con-
sequently their motion in the image is large (for example, the specularity visible
at the bottom of the first image has moved to the middle by the end of the
positioning task). The specularity also nearly saturates the image, meaning that
little information is available around it. The behavior of the control law is better
when the complete illumination model is considered, since the convergence is
faster (see Figure 5.9(a)). This is also confirmed by the positioning errors (compare
Figure 5.9(b) with Figure 5.9(c) and Figure 5.9(d) with Figure 5.9(e)).
Note that tracking tasks and other positioning tasks (where the lighting is not
mounted on the camera) have been considered in [6]. These results show, here again,
the benefit of using a complete illumination model instead of relying on the classical
temporal luminance constancy assumption.
5.5
Conclusion and Future Work
We have shown in this chapter the benefit of photometric visual servoing.
This new visual servoing scheme avoids complex image processing, leaving only
the image spatial gradient to compute. It also avoids the learning step required by
previous approaches that use the image intensity as visual features. This
new scheme has other important advantages. Concerning positioning
tasks, the positioning errors are always very low. Moreover, the approach is not
sensitive to partial occlusions or to coarse approximations of the depths required
to compute the interaction matrix. Let us point out that the behavior of the robot is
not disturbed by complex illumination changes, since the interaction matrix has been
derived from a suitable illumination model.
Future work will address the case where the intensity of the light source varies
during servoing.
Acknowledgements. The authors wish to thank François Chaumette and Seth Hutchinson
for their constructive comments.
References
[1] Abdul Hafez, A., Achar, S., Jawahar, C.: Visual servoing based on Gaussian mixture
models. In: IEEE Int. Conf. on Robotics and Automation, ICRA 2008, Pasadena, California,
pp. 3225–3230 (2008)