3. a rule for computing preferred directions D = D ( t ) relative to S where ⟨D⟩ u = 0
if and only if u = 0 (the motion direction is consistent with the control law),
then applying the following choice of preferred direction:

    D_g ( x ) = (1 − k_d) [ D ] f + k_d ‖f‖ ⟨D⟩ u,    0 < k_d < 1,    (4.13)
yields a virtual fixture that controls the robot toward S and seeks to maintain user
motion within that surface.
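As a concrete sketch, the preferred-direction rule (4.13) can be computed numerically, reading it as D_g = (1 − k_d)[D] f + k_d ‖f‖ ⟨D⟩ u. Here [D] is taken to be the orthogonal projector D (DᵀD)⁺ Dᵀ onto the span of D, ⟨D⟩ = I − [D], and the gain k_d = 0.5 and the toy vectors are illustrative assumptions, not values from the text:

```python
import numpy as np

def span_proj(D):
    # [D]: orthogonal projector onto the column span of D
    return D @ np.linalg.pinv(D)

def guided_direction(D, f, u, k_d=0.5):
    """Blend user input f with the control law output u (0 < k_d < 1).

    Sketch of the rule above: D_g = (1 - k_d)[D] f + k_d ||f|| <D> u,
    where <D> = I - [D].
    """
    P = span_proj(D)
    Q = np.eye(D.shape[0]) - P
    return (1.0 - k_d) * (P @ f) + k_d * np.linalg.norm(f) * (Q @ u)

# Toy example: the preferred span is the x-y plane, the control law
# pushes along z (i.e. entirely in the non-preferred directions).
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
f = np.array([1.0, 2.0, 3.0])      # user input
u = np.array([0.0, 0.0, -0.5])     # control law output, normal to span(D)
d_g = guided_direction(D, f, u)
```

The preferred components of f pass through attenuated by (1 − k_d), while the control term is scaled by the magnitude of the user's input, so guidance toward S grows with user effort.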
Note that a sufficient condition for condition 3 above to be true is that, for all
pairs u = u ( t ) and D = D ( t ), [ D ] u = 0. This follows directly from the properties of
projection operators given previously: if [ D ] u = 0, then ⟨D⟩ u = u − [ D ] u = u,
which vanishes exactly when u does.
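This argument can be checked numerically. A minimal sketch, assuming [D] = D (DᵀD)⁺ Dᵀ (the projector onto the span of D) and ⟨D⟩ = I − [D], with an illustrative D whose span is the x-y translation plane:

```python
import numpy as np

def span_proj(D):
    # [D]: orthogonal projector onto the column span of D
    return D @ np.linalg.pinv(D)

D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # span(D): the x-y plane
u = np.array([0.0, 0.0, 0.7])      # a control output orthogonal to span(D)

P = span_proj(D)
Q = np.eye(3) - P                  # <D> = I - [D]
assert np.allclose(P @ P, P)       # [D] is idempotent (a projector)
assert np.allclose(P @ u, 0)       # the sufficient condition [D]u = 0 holds
assert np.allclose(Q @ u, u)       # hence <D>u = u, so <D>u = 0 iff u = 0
```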
To provide a concrete example, consider again the problem of moving the tool
tip to a plane through the origin, but let us now add the constraint that the tool z
axis should be oriented along the plane normal vector. In this case, n is a preferred
direction of motion (it encodes rotations about the z axis that are not important for
us). Let z denote the vector pointing along the tool z axis and define the control law

    u = ( −( x · n ) n ;  z × n ).    (4.14)
It is easy to see that this law moves the robot into the plane and simultaneously
orients the end-effector z axis along the normal to the plane. Now, let

    D = D ( t ) = [ ⟨n⟩  0 ]
                  [  0   n ] .

It follows that [ D ] projects onto the translation vectors that span the plane, together
with rotations about the normal to the plane. Therefore [ D ] u = 0, since (4.14) produces
translations normal to the plane and rotations about axes that lie in the plane. Thus,
the general virtual fixturing rule can be applied.
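The example can be checked numerically. The sketch below uses illustrative values for n, x, and z (assumptions, not values from the text), builds D as above with ⟨n⟩ = I − n nᵀ, and verifies that [D] u = 0 for the control law (4.14):

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])               # unit plane normal (illustrative)
x = np.array([0.2, -0.1, 0.5])              # tool-tip position (illustrative)
z = np.array([0.3, 0.4, np.sqrt(0.75)])     # unit tool z axis (illustrative)

# Control law (4.14): translate toward the plane, rotate z toward n.
u = np.concatenate([-(x @ n) * n, np.cross(z, n)])

# Preferred directions: translations within the plane, rotation about n.
plane = np.eye(3) - np.outer(n, n)          # <n>: projector onto the plane
D = np.block([[plane, np.zeros((3, 1))],
              [np.zeros((3, 3)), n.reshape(3, 1)]])

span = D @ np.linalg.pinv(D)                # [D]
assert np.allclose(span @ u, 0)             # [D]u = 0: the rule applies
```

The translational part of u is along n (normal to the plane) and the rotational part is about z × n (an axis in the plane), so both components lie outside the span of D, as the final assertion confirms.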
4.3 Vision-based Virtual Fixtures
Now, we turn to the problem of providing assistance, where the objective defining
the virtual fixture is observed by one or more cameras. To simplify the presentation,
in what follows we assume that we have calibrated the camera internal parameters
and can therefore work in image normalized coordinates [9].
4.3.1 Controlling the Viewer: Pure Translation
Let us start with a well-studied problem. We have a camera fixed to the endpoint
of the manipulator, and the camera observes a fixed, static environment. Our goal is
to control the motion of the end-effector by defining a motion for the camera itself
based on information extracted from the camera image.