For a complete categorization of image-plane constructions for two-camera sys-
tems, we refer the reader to [7, 10].
4.4 Examples of Use
In our initial work, a preliminary version of the algorithms described above was implemented on the JHU SHR [19]. Here, we briefly describe the setup, the results, and their relationship to the more general framework given above. More details can be found in [1].
The robot was equipped with a vision sensor rigidly attached to the force-sensing
handle on the end effector. We chose to execute two-dimensional tasks parallel to the
image plane, which was in turn arranged to be parallel to two of the base stages of
the robot. We performed experiments using a charge-coupled device (CCD) camera
at the macro scale and a GRIN lens endoscope at the micro scale. The vision sensor
always viewed the task plane, allowing the motion references to be read and the task execution to be displayed in real time (Figure 4.1). On-screen display of the task execution is useful for operators at the macro scale and essential at the micro scale, where the task would be impossible to complete with the naked eye.
The path was furnished to both the system and the user by printing a sine curve
(35 mm amplitude, 70 mm wavelength, and 0.54 mm width) on the task plane (in
black on white paper). At the micro scale, it was not possible to print a sufficiently smooth curve, so we instead embedded a wavy human hair (about 80 μm in diameter) in glue on a yellow sheet of paper. In the macro case, the camera was positioned 200 mm from the paper, yielding a pixel footprint of 0.066 mm on the working surface. In the micro case, the endoscope was about 150 μm above the working surface, yielding a pixel footprint of about 1 μm (Figure 4.2).
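
These footprint figures follow from the usual pinhole relation: a pixel's footprint on the task plane is roughly (pixel pitch / focal length) × working distance. The short sketch below verifies the macro-scale number; note that the pixel pitch and focal length are assumed values of our own choosing (they are not given in the text), picked only to reproduce the reported 0.066 mm footprint.

    # Minimal sketch of the pinhole pixel-footprint relation.
    # ASSUMED values: 8.25 um pixel pitch, 25 mm focal length
    # (not stated in the text; chosen to match the reported numbers).

    def pixel_footprint(pixel_pitch_mm, focal_length_mm, working_distance_mm):
        """Size of one pixel projected onto the task plane, in mm."""
        return pixel_pitch_mm / focal_length_mm * working_distance_mm

    print(pixel_footprint(0.00825, 25.0, 200.0))  # macro case: 0.066 mm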
The center of the image was graphically marked, and users were instructed to
perform path following tasks relative to this mark. The sensor on the handle was
used to record user commands. The force sensor resolution is 12.5 mN, and force values are expressed as multiples of this base unit.
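
A raw handle reading is thus an integer count of 12.5 mN steps; converting it back to SI units is a single scaling, as in this minimal sketch (the function name is ours):

    FORCE_LSB_MN = 12.5  # handle force sensor resolution, in millinewtons

    def counts_to_newtons(counts):
        """Convert a quantized handle-force reading to newtons."""
        return counts * FORCE_LSB_MN * 1e-3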
Visual tracking (XVision system [11]) was used to measure (in real time) the
local position of, and tangent direction d to, the path. Subpixel interpolation was used
to increase the precision of these measurements. The vision and control subsystems
executed on two different personal computers (PCs), and the data exchange was
realized over a local network. The control system operated at 100 Hz, using the
most recent available data from the vision system and handle force sensor.
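
A common way to realize this structure is a fixed-rate control thread that simply latches the newest sample published by the vision process, so the 100 Hz loop never blocks on the (slower) camera. The sketch below shows this pattern in simplified form; it is our own illustration, not the original system's code, and the simulated 30 Hz vision rate is an assumption.

    import random
    import threading
    import time

    # Newest measurement published by the vision subsystem (shared state).
    latest = {"s": (0.0, 0.0)}
    lock = threading.Lock()

    def vision_thread():
        """Stand-in for the vision PC: publishes tracker data at its own rate."""
        while True:
            with lock:
                latest["s"] = (random.uniform(-1, 1), random.uniform(-1, 1))
            time.sleep(1 / 30)  # ASSUMED ~30 Hz camera rate

    def control_loop(period=0.01):
        """100 Hz control: always uses the most recent vision sample available."""
        while True:
            t0 = time.monotonic()
            with lock:
                s = latest["s"]
            # ...combine s with the handle-force reading and command the robot...
            time.sleep(max(0.0, period - (time.monotonic() - t0)))

    threading.Thread(target=vision_thread, daemon=True).start()
    control_loop()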
In terms of our previous formulation, the marked image location at the image center means that x = 0. Further, the workspace is a plane (x ∈ ℝ²). The preferred direction is given by the tangent measurements from the tracking algorithm. Implicitly, the control law used to position the manipulator on the line is

u = s − x = s,    (4.22)

where s ∈ ℝ² is the current location of the visual tracker in the image. Further, s was constrained to lie along the line through the marked image location, normal to the path.
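
As a concrete illustration of Eq. (4.22) and the normal-line constraint, the sketch below projects the raw tracker position onto the line through the image center normal to the measured tangent d, and returns that projection as the command u. This is our own rendering of the stated control law, not the original implementation.

    import numpy as np

    def control_command(s_raw, d):
        """Eq. (4.22): u = s - x with x = 0 at the marked image center.

        s_raw -- raw tracker position in the image (origin at the marked center)
        d     -- tangent direction of the path at the tracked point
        """
        d = d / np.linalg.norm(d)
        n = np.array([-d[1], d[0]])   # normal to the path
        s = (s_raw @ n) * n           # constrain s to the normal line through x = 0
        return s                      # u = s - x = s, since x = 0

    u = control_command(np.array([3.0, -1.5]), np.array([1.0, 1.0]))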