Fig. 9. Plot of CCW tour.
Human interaction with smart wheelchairs is quite complex due to the physical and, sometimes, cognitive restrictions of the user. People with physical disabilities have difficulties in handling standard wheelchair controls, usually because they lack strength and/or coordination in the upper limbs. For this reason, they need mobile computing devices equipped with intelligent interfaces that assist them as much as possible, frequently taking over navigation tasks [1]. Adaptive interfaces are used to decrease the physical effort required from the user and, simultaneously, to maximize the user's cognitive participation for rehabilitation purposes [3].
In the traditional approach, the robot performs the mapping, path-planning, and driving tasks using maps. The interface assumes that users have mental maps of the environment similar to the ones used by the system and that they know (or can ask the system for) their current position. In addition, users must be able to locate positions (mainly the current one and the goal) in their mental map [17]. The map-based approach allows the interface to process the user's orders in terms of elements of the map.
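To make this concrete, a map-based interface of this kind can be sketched as a topological graph of named places. The following Python fragment is an illustration only, not the system described here; the map, the place names, and the plan_route helper are all invented for the example:

from collections import deque

# Hypothetical topological map: each named place lists its neighbors.
TOPOLOGICAL_MAP = {
    "entrance":   ["corridor_a"],
    "corridor_a": ["entrance", "room_5", "lift"],
    "room_5":     ["corridor_a"],
    "lift":       ["corridor_a"],
}

def plan_route(current, goal):
    """Breadth-first search over the map; returns the list of places to traverse."""
    frontier = deque([[current]])
    visited = {current}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in TOPOLOGICAL_MAP.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable from the current position

# The user's order is processed in terms of elements of the map:
print(plan_route("entrance", "room_5"))  # ['entrance', 'corridor_a', 'room_5']

Note that this only works if the user can name both the goal and the current position in terms the map understands, which is exactly the assumption the next paragraph questions.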
Nevertheless, the user seldom has a structured mental map of the environment. This is very common when the user navigates an unknown building, but it also happens in known environments, due to the cognitive difficulty of building these kinds of mental structures. When navigation is based on biological behavior, the human and the robot share relative navigational concepts, such as “follow the corridor,” “find the fire extinguisher,” etc., which are easier for the user to process mentally. In this case, the interface can accept both complex orders (such as “go to room number 5”) that comprise many individual actions and partial, relative descriptions of the path, such as “go straight ahead,” “find the window near the lift,” “turn left,” etc. The previously described biologically inspired model follows this procedure. In this way, it allows for “natural” human-robot interaction, similar to the interaction between humans.
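As an illustration of the contrast, the sketch below (again hypothetical; Robot, BEHAVIORS, and execute are invented names, not part of the system described) shows how such relative navigational concepts can be represented as behavior primitives that both the user and the robot understand:

class Robot:
    """Stub wheelchair controller; a real implementation would drive the motors."""
    def follow_corridor(self):
        print("following the corridor")
    def turn(self, degrees):
        print(f"turning {degrees} degrees")
    def go_straight(self):
        print("going straight ahead")
    def search_landmark(self, name):
        print(f"searching for the {name}")

# Hypothetical mapping from relative navigational concepts to behaviors.
BEHAVIORS = {
    "follow the corridor":        lambda r: r.follow_corridor(),
    "turn left":                  lambda r: r.turn(90),
    "turn right":                 lambda r: r.turn(-90),
    "go straight ahead":          lambda r: r.go_straight(),
    "find the fire extinguisher": lambda r: r.search_landmark("fire extinguisher"),
}

def execute(robot, utterance):
    """Run the behavior named by a relative command, if the concept is shared."""
    action = BEHAVIORS.get(utterance.strip().lower())
    if action is None:
        raise ValueError(f"unknown command: {utterance!r}")
    action(robot)

execute(Robot(), "follow the corridor")
execute(Robot(), "turn left")

The key design point is that each entry names a concept the user can hold without a global map; no coordinates or structured positions are required.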
The user-wheelchair interface designed for this system shares with other smart wheelchairs (e.g., the one proposed by Yanco [75]) the same physical structure (input and output devices, dialogue modes, etc.). However, the design of the cognitive interaction model is based on concepts that are well understood on both sides of the interface, allowing for both complex commands from users with a clear mental map of the place and simple commands from users with only partial knowledge of the environment, which is not possible for wheelchairs based on classical navigation models.
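One way to picture such a dual-mode interface, as a hedged sketch rather than the published design, is a dispatcher that expands a complex order into a sequence of individual actions when the user names a goal, but also accepts a single relative command; it reuses the hypothetical plan_route, Robot, and execute helpers from the earlier sketches:

def handle_order(robot, utterance, current_place=None):
    """Accept either a complex order ('go to <place>') or a single relative command."""
    utterance = utterance.strip().lower()
    if utterance.startswith("go to ") and current_place is not None:
        # Complex order: expand the goal into a sequence of individual actions.
        goal = utterance[len("go to "):].replace(" ", "_")
        route = plan_route(current_place, goal)
        if route is None:
            raise ValueError(f"no route to {goal!r}")
        for place in route[1:]:
            robot.search_landmark(place)  # one individual action per map element
    else:
        # Partial knowledge: execute a single relative description of the path.
        execute(robot, utterance)

handle_order(Robot(), "go to room 5", current_place="entrance")
handle_order(Robot(), "turn left")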