6.6 Grasping
Once the hand is suitably close to an object, the robot can plan to act on
it. Ultimately, grasping is the goal of moving a complex body through the environ-
ment: to act on objects and change their "state" in pursuit of any number of goals,
e.g., fetching an object, moving it, or bringing it somewhere else. Here we focus on
power grasp, which is characterized by large areas of contact between the object
and the surfaces of the palm and fingers. Our method seeks object regions that
match the curvature of the robot's palm. The entire procedure relies on binocular
vision, which provides a 3D point cloud of the visible part of the object. The
obtained point cloud is segmented into smooth surfaces. A score function measures
the quality of candidate grasp points on the basis of the surface they belong to. A
component of the score function is learned from experience and is used to map the
curvature of the object surfaces to the curvature of the robot's hand. The user can
further provide top-down information on the preferred grasping regions (e.g.,
handles). We guarantee the feasibility of a chosen hand configuration by measuring
its manipulability. We demonstrate the effectiveness of the proposed approach by
tasking a humanoid robot with grasping a number of unknown real objects.
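The feasibility check mentioned above relies on a manipulability measure. The text does not specify which one, so the sketch below assumes Yoshikawa's classical index, w(q) = sqrt(det(J(q) J(q)ᵀ)), computed from the arm Jacobian; the function name and the example Jacobian are illustrative, not taken from the source:

```python
import numpy as np

def yoshikawa_manipulability(jacobian: np.ndarray) -> float:
    """Yoshikawa's manipulability index: sqrt(det(J @ J.T)).

    Values near zero indicate proximity to a kinematic singularity,
    i.e., the corresponding hand configuration is hard to reach.
    """
    J = np.asarray(jacobian, dtype=float)
    return float(np.sqrt(np.linalg.det(J @ J.T)))

# Hypothetical 6x7 Jacobian of a redundant 7-DOF arm at some configuration.
J = np.random.default_rng(0).standard_normal((6, 7))
w = yoshikawa_manipulability(J)  # larger w = better-conditioned pose
```

In a grasp planner, candidate hand configurations whose index falls below a threshold would be discarded before ranking.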
Before deciding how to grasp an object, we need to define where to grasp
it. Usually the answer to this problem is not unique; in fact, to lift an
object, one can place the hand in several different positions. If we limit our analysis to
power grasp, the number of possible locations gets smaller, but there is still
no universally accepted rule on where to grasp an object. Several factors influence
how a person performs a grasp (Cutkosky and Howe 1990); some concern the
object's shape and dimensions, while others concern its weight and surface
roughness, as well as the task at hand.
In our implementation we take into account some of these factors in the process
of extracting a set of significant points on the object surface. We first create a 3D
point cloud of the visible part of the object (from a single viewpoint), using the
stereo vision system of the iCub. We subsequently compute a minimum bounding
box enclosing the point cloud, estimating the approximate dimensions and orienta-
tion of the object with respect to the robot's root frame. Unsupervised learning
techniques are employed to segment the reconstructed cloud into smooth regions. We
finally look for the regions that best approximate the robotic palm's curvature. As
shown in Roa et al. (2012) and Chalon et al. (2010), spreading the fingers and
enclosing the object against the palm significantly helps in obtaining a stable grasp.
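The bounding-box step above estimates the object's size and orientation from the point cloud. A common way to sketch this is a PCA-based oriented box; note this only approximates a true minimum-volume bounding box (which would require, e.g., an exhaustive rotating-calipers search), and all names here are illustrative:

```python
import numpy as np

def oriented_bounding_box(points: np.ndarray):
    """Approximate oriented bounding box of a 3D point cloud via PCA.

    Returns (center, axes, extents): `axes` holds the principal
    directions as rows, `extents` the box side lengths along them.
    """
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    # Principal axes from an SVD of the centered cloud.
    _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
    proj = (pts - mean) @ vt.T            # coordinates in the box frame
    mins, maxs = proj.min(axis=0), proj.max(axis=0)
    extents = maxs - mins
    center = mean + ((mins + maxs) / 2) @ vt
    return center, vt, extents
```

The resulting extents give the approximate object dimensions, and the axes its orientation with respect to the root frame; regions can then be compared against the palm size before the sampling step.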
Hence we limit our search to the most compatible surfaces under the criterion that
they have to match the palm size and curvature. Firstly, we guarantee that the hand
lies in a visible region; therefore we select, among the obtained smooth regions,
those large enough as compared to the size of the palm. We then apply a uniform
sampling on the selected clusters of points, retrieving a smaller number of points
along with their normals. Each point here represents the center of a planar region
computed on the point's neighborhood with an area similar to that of the robot's
palm. This set of points is ranked with the help of a score function, which takes into