guide the motion of a robot with respect to a target object based on the feedback
obtained through a vision system [31]. Usually an error function e (also called task
function [21]) is defined as
e(t) = s(t) - s_d                                                (11.1)
where s and s_d denote the vectors of current and desired features, respectively. The
visual servoing objective is set to regulate this error to zero.
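As a minimal numerical sketch of Eq. (11.1), the task function can be evaluated and driven toward zero, e.g. by imposing an exponential decay on the error. The feature values below are purely illustrative (they could be stacked image-plane coordinates of two tracked points):

```python
import numpy as np

# Illustrative current and desired feature vectors (values are made up).
s = np.array([0.10, 0.05, -0.20, 0.15])    # current features s(t)
s_d = np.array([0.00, 0.00, -0.10, 0.10])  # desired features s_d

# Task (error) function of Eq. (11.1): e(t) = s(t) - s_d
e = s - s_d

# A common regulation goal imposes exponential decay of the error,
# e_dot = -lambda * e, so that e converges to zero.
lam = 0.5
e_dot = -lam * e
```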
The existing visual servoing techniques are classified into different categories
based on the definition of error function, the underlying control architecture, and the
robot-camera configuration ( i.e. , eye-in-hand vs. eye-to-hand configuration 1 ). For a
detailed review on existing techniques and their classification see [7], [8], and [31].
In summary, the existing approaches can be classified into two main categories: (1)
position-based visual servoing (PBVS) where the feedback is defined in terms of
the 3D Cartesian information derived from the image(s), and (2) image-based visual
servoing (IBVS) where the feedback is defined directly in the image in terms of
image features.
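To make the IBVS idea concrete, here is a sketch of the classical control law v = -lambda * L^+ (s - s_d) for normalized image-point features, using the standard 2x6 interaction (image Jacobian) matrix of a point at depth Z. The function names and the assumption of known point depths are ours, not the survey's:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z,
    relating the point's image velocity to the 6-DOF camera twist."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical IBVS law v = -lam * L^+ (s - s_d): stack one 2x6
    interaction matrix per point and invert with the pseudoinverse.
    Point depths are assumed known or estimated."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

When the current and desired features coincide, the commanded camera velocity is zero; otherwise the law moves the camera so the feature error decays.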
IBVS techniques have better local stability and convergence in the presence of camera calibration and modeling errors. However, they suffer from global convergence problems and hence may break down, in particular when the initial and desired camera poses are distant [6]. For example, some of the image features might leave the camera's field of view, causing the servoing task to fail. Moreover, there is no direct control over the robot/camera motion induced by the image-based control law, which might result in infeasible maneuvers due to the robot's joint limits and/or collisions with workspace obstacles.
Amalgamating path-planning techniques with reactive image-based visual servoing strategies can robustify existing image-based tracking systems by accounting for critical constraints and uncertainties in robotics applications where a high disparity between the initial and desired views of a target is inevitable (e.g., target interception, space docking, reaching and grasping). The main idea of path-planning for visual servoing is to plan and generate feasible image trajectories while accounting for certain constraints, and then to servo the robot along the planned trajectories.
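As a minimal illustration of this idea (the interpolation scheme below is a deliberately simple stand-in for the planners surveyed later, and all names are ours), one can generate intermediate feature waypoints between the initial and desired views and let the reactive controller track each waypoint in turn, keeping the per-step error small:

```python
import numpy as np

def plan_feature_trajectory(s0, sd, n_steps):
    """Straight-line interpolation between initial and desired feature
    vectors -- a toy planned image trajectory. Real planners also enforce
    field-of-view, joint-limit, and obstacle constraints."""
    s0, sd = np.asarray(s0, float), np.asarray(sd, float)
    return [s0 + t * (sd - s0) for t in np.linspace(0.0, 1.0, n_steps)]

# The reactive controller then regulates e_k = s - s_k toward each
# intermediate waypoint s_k instead of jumping straight to s_d.
waypoints = plan_feature_trajectory([0.3, -0.2], [0.0, 0.1], 5)
```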
In this survey we provide a comprehensive technical review on existing and re-
cent approaches to path-planning for visual servoing. For each approach the set
of constraints and the assumptions are explained and the underlying path-planning
technique is discussed along with the issues regarding its integration with the reac-
tive image-based controllers.
In Section 11.2 we study the two sets of critical constraints in visual servo-
ing context: (1) image/camera, and (2) robot/physical constraints. The existence of
such constraints motivates the need for path-planning techniques aimed at making the servoing process more robust, especially in complex visual servoing scenarios. In Section 11.3 a comprehensive overview of these approaches and their
1 In an eye-in-hand configuration the camera is mounted on the end-effector of the robot, so the robot's motion results in camera motion; in an eye-to-hand configuration the camera is stationary and observes the robot's end-effector, so the robot's motion does not affect the camera pose [31].