11.3.4 Global Path-planning
The convergence problems and deficiencies of the above path-planning techniques
in accounting for all the constraints of visual servoing tasks motivate the need for
general, global path-planning approaches. A great deal of research on global
path-planning for various robotic systems has been carried out within the
path-planning community; see, e.g., [39] and [40]. Here we report on some of these
techniques that have been successfully incorporated into the visual servoing
framework.
A globally stabilizing strategy using navigation functions is presented in [17], which
guarantees convergence to a visible goal from almost every initial visible configu-
ration while maintaining visibility of all features along the way, without following a
predefined reference image trajectory. One should note, however, that constructing
such navigation functions is feasible only for very simple scenarios.
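To illustrate the idea, the following is a minimal sketch of a navigation-function controller in the simplest possible setting, a sphere world in the style of Koditschek and Rimon; the visibility-aware navigation functions of [17] are considerably more involved, and the tuning parameter k, the numerical gradient, and the function names below are illustrative assumptions rather than the construction used in [17].

```python
import numpy as np

def navigation_function(q, q_goal, obstacles, workspace, k=4.0):
    """Koditschek-Rimon style navigation function on a sphere world.

    workspace: (center, radius) of the bounding sphere;
    obstacles: list of (center, radius) spheres.
    Free of spurious local minima only for a sufficiently large k.
    """
    gamma = np.sum((q - q_goal) ** 2)              # squared distance to the goal
    c0, r0 = workspace
    beta = r0 ** 2 - np.sum((q - c0) ** 2)         # workspace-boundary term
    for ci, ri in obstacles:
        beta *= np.sum((q - ci) ** 2) - ri ** 2    # one term per obstacle
    return gamma / (gamma ** k + beta) ** (1.0 / k)

def descend(q, q_goal, obstacles, workspace, step=0.01, eps=1e-6):
    """One gradient-descent step q <- q - step * grad(phi), numerical gradient."""
    grad = np.zeros_like(q)
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        grad[i] = (navigation_function(q + dq, q_goal, obstacles, workspace)
                   - navigation_function(q - dq, q_goal, obstacles, workspace)) / (2 * eps)
    return q - step * grad
```

Iterating the descent step from almost any collision-free start drives the configuration to the goal; the "almost" reflects the measure-zero set of saddle points that any navigation function necessarily has.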
In [2], a probabilistic roadmap approach is utilized to plan minimal-
occlusion paths for an in-hand camera with respect to a target object. The authors em-
ploy the technique proposed in [61] to compute the boundary separating the visible
regions (from where the target is visible) from the occluded regions (from where
the target is not visible due to occlusion by workspace obstacles). Their algorithm
then assigns penalties to camera trajectories within a given probabilistic roadmap
(over camera translations), proportional to the distance the camera travels
outside the visible region. One should note that the camera's orientation and field-
of-view limits are not taken into account in this approach.
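As a rough sketch of this kind of penalization (not the actual algorithm of [2], which relies on the exact visibility boundary computed with [61]), one can sample each roadmap edge, charge a penalty proportional to the portion of the edge lying outside the visible region, and run a shortest-path search over the penalized roadmap. The predicate `is_visible` and the penalty weight below are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def edge_cost(p, q, is_visible, weight=10.0, n_samples=20):
    """Edge length plus a penalty proportional to the length traveled
    outside the visible region, estimated by sampling the segment."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    length = np.linalg.norm(q - p)
    seg = length / n_samples
    occluded = sum(seg for t in np.linspace(0.0, 1.0, n_samples, endpoint=False)
                   if not is_visible(p + t * (q - p)))
    return length + weight * occluded

def min_occlusion_path(nodes, edges, start, goal, is_visible):
    """Shortest path over a roadmap whose edges are penalized for occlusion."""
    G = nx.Graph()
    for i, j in edges:
        G.add_edge(i, j, weight=edge_cost(nodes[i], nodes[j], is_visible))
    return nx.shortest_path(G, source=start, target=goal, weight="weight")
```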
Inspired by the work in [62] on global path-planning with general end-effector
constraints, we integrated sampling-based global path-planning with visual ser-
voing for a robotic arm equipped with an in-hand camera. The proposed planner
[32] explores the camera space for camera paths satisfying field-of-view and
occlusion constraints, and utilizes a local planner to track these paths in
the robot's joint space, ensuring feasible robot motions while accounting for
the robot's joint limits and collisions with obstacles. The result is a search tree,
as in [35], which alternately explores the camera and joint spaces (see Figure 11.1). The
camera path connecting the initial and desired camera poses is then extracted from
the tree and projected into the image space to obtain sampling-based feature tra-
jectories as a sequence of image waypoints. The image-space waypoints are then
time-parameterized and scaled using cubic splines. The spline feature trajectories
are tracked using an IBVS technique (as in [49]) at the execution stage.
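The execution stage can be sketched as follows, assuming normalized point features with known (or roughly estimated) depths; the gain, the spline parameterization, and the function names are illustrative assumptions, and the controller of [49] as used in [32] may differ in detail. The tracking law drives the feature error e = s - s*(t) to zero while feeding forward the reference velocity ds*/dt.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_trajectory(times, waypoints):
    """Time-parameterize image-space waypoints with cubic splines.

    waypoints: (N, 2k) array, each row stacking k normalized point features.
    Returns the reference trajectory s*(t) and its derivative ds*/dt.
    """
    s_ref = CubicSpline(times, waypoints, axis=0)
    return s_ref, s_ref.derivative()

def interaction_matrix(s, depths):
    """Stacked interaction matrix for k normalized point features (x, y)."""
    rows = []
    for (x, y), Z in zip(s.reshape(-1, 2), depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_tracking_control(s, s_ref, s_ref_dot, depths, gain=0.5):
    """Camera velocity screw v = L^+ ( -gain * (s - s*(t)) + ds*/dt )."""
    L_pinv = np.linalg.pinv(interaction_matrix(s, depths))
    return L_pinv @ (-gain * (s - s_ref) + s_ref_dot)
```

At each control cycle one evaluates the spline and its derivative at the current time and applies the resulting six-dimensional camera velocity; the feed-forward term is what lets the controller follow the planned trajectory rather than merely converge to its endpoint.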
We demonstrated via simulations and real experiments [32] that the robot is able
to visual servo to a desired configuration while avoiding occlusions of the target, keep-
ing the target within the camera's field of view, and avoiding collisions with obsta-
cles. Such capabilities enhance the applicability of visual servoing to significantly
more complex environments and tasks. In the proposed approach, we assumed that the
3D model of the target object and the camera's intrinsic parameters are known a
priori. The 3D model of the object is required to estimate the corresponding camera
poses at the initial and desired views. Furthermore, these parameters are required