of static tactile data, whereas active sensing is used when motion is involved. Okamura
and Cutkosky [69] stated that motion is required when features, particularly those which
are small, cannot be sensed accurately through static touch. For example, sliding a
finger along an edge yields far more information about an object's sharpness than
static contact alone ever could. A dexterous
hand can manipulate an object and retrieve information on its properties which would
otherwise be impossible to determine. For example, tilting an object allows its weight
or center of gravity to be estimated, while running a finger over its surface allows its
friction and texture to be approximated, and so on.
From Gibson's point of view, shaped by his work with patterns and objects, the haptic
system does not consist merely of kinesthetic and cutaneous receptor surfaces; these
surfaces are also actors and perceivers in the real world. Active touch differs from passive touch in the
intentionality of our exploratory behaviors. The distinction is somewhat different from
the older reafference/exafference separation, in which the former is the self-stimulation of
an organism resulting from the movements of its own body, whereas the latter results
from external factors. Gibson thought that active touch revealed things in a real, or at
least an imagined, way, whereas passive touch did not commit itself to any immediate
or tangible feeling.
Klatzky and Lederman [70] categorized the properties of the object material in terms of
texture, compliance, apparent temperature (due to heat flow), and geometric proportions
(size and shape). Indeed, in order for any haptic object identification system to be deemed
worthwhile and useful, it must first demonstrate that it can provide all of this information.
Lederman and Klatzky conducted studies that directly addressed the availability of
material properties under haptic exploration of objects [71]. The procedure was based on
a paradigm from vision, called visual search, as adapted by Treisman and Gormican [72].
In a visual display, it is relatively straightforward to vary the number of items by adding
distracters to, or subtracting them from, the field of view. It is, however, less obvious
how the number of items in a haptic display can be varied, which makes the investigation
of haptic identification more difficult.
In work conducted by Klatzky and Lederman [73], object properties are divided into
four sets: (i) material discrimination (rough surface vs. smooth surface or warm vs. cool
surface); (ii) discrimination of a flat surface from a surface with an abrupt spatial dis-
continuity, such as a raised bar; (iii) discrimination of two- or three-dimensional spatial
layout, such as whether a raised dot was on the left or right of an indentation; and (iv)
discrimination between continuous three-dimensional contours, such as a curved surface
as opposed to a flat surface.
Together, these various research findings suggest that the role of material in haptic
object identification could contribute substantially to the high level of performance that
is observed. However, in order for material information about the stimulus object to be
important in identification, the representation of objects in memory must also incorporate
material information that can be matched with the stimulus.
Material properties could be used to represent and identify an object, although one
problem with this idea is that the name given to an object depends primarily on its
shape and geometric properties. For example, it is well known that people use naming to divide
objects into categories whose members share attributes, and that shape is a particularly
important attribute when an object is categorized by its most common name [54].
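The idea that identification requires matching a stimulus's material and geometric properties against stored object representations can be sketched as a simple nearest-neighbour lookup. The feature names, values, and object entries below are illustrative assumptions for this sketch, not drawn from the cited studies:

```python
from dataclasses import dataclass
import math

@dataclass
class HapticFeatures:
    """Illustrative feature vector: material properties plus coarse geometry.

    All fields are assumed to be normalized to [0, 1] for this sketch.
    """
    roughness: float     # texture (0 = smooth, 1 = rough)
    compliance: float    # softness (0 = rigid, 1 = soft)
    thermal_flux: float  # apparent temperature via heat flow from the skin
    size: float          # characteristic dimension

    def distance(self, other: "HapticFeatures") -> float:
        """Euclidean distance between two feature vectors."""
        return math.sqrt(
            (self.roughness - other.roughness) ** 2
            + (self.compliance - other.compliance) ** 2
            + (self.thermal_flux - other.thermal_flux) ** 2
            + (self.size - other.size) ** 2
        )

def identify(stimulus: HapticFeatures, memory: dict) -> str:
    """Return the name of the stored object closest to the stimulus."""
    return min(memory, key=lambda name: stimulus.distance(memory[name]))

# Hypothetical "memory" of previously encountered objects.
memory = {
    "ceramic mug":  HapticFeatures(0.2, 0.1, 0.8, 0.5),
    "sponge":       HapticFeatures(0.6, 0.9, 0.3, 0.4),
    "wooden block": HapticFeatures(0.4, 0.1, 0.4, 0.5),
}

# A probe stimulus: rough, soft, and thermally neutral -- closest to the sponge.
probe = HapticFeatures(0.55, 0.85, 0.35, 0.45)
print(identify(probe, memory))  # -> sponge
```

The sketch deliberately omits shape, which, as noted above, dominates how objects are actually named and categorized; a fuller model would weight geometric features accordingly.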