The project research team considered the difficulty that some patients have in
controlling a wheelchair with conventional input devices such as the standard joystick.
Therefore, new ways of interaction between the wheelchair and the user were
integrated, creating a multiple-input system based on a multimodal interface.
The system allows users to choose the type of command that best fits their needs,
increasing comfort and safety.
A simulated environment was developed that models the intelligent wheelchair and
its surroundings. In this environment the different ways of driving the intelligent
wheelchair can be tested safely, since the behavior of the simulated intelligent
wheelchair closely matches that of the real intelligent wheelchair.
3 Methodology for Automatic Extraction of User Interfaces/Profiles
The potential users of the Intelligent Wheelchair have particular characteristics and
constraints. It is therefore very important to adjust and adapt the way of driving the
intelligent wheelchair to the specific patient. The data acquired while users perform
a test drive with the multimodal interface and the intelligent wheelchair will be used
to improve this adaptability.
This section presents the features and the overall architecture of the IntellWheels
Multimodal Interface.
3.1 IntellWheels Multimodal Interface
Several projects reported in the literature address the adaptation and design of
specific interfaces for individuals with severe physical disabilities [25-27].
Nevertheless, most of these projects present restricted solutions for enabling the
user to drive a particular wheelchair: it is common to find a single input method,
such as voice recognition, while others focus merely on facial expression
recognition [27]. Since physical disabilities vary widely and are specific to each
individual, it is important to provide the greatest possible number of recognition
methods in order to cover the largest possible number of individuals with different
characteristics.
The IntellWheels Multimodal Interface offers five basic input devices: joystick,
speech recognition, recognition of head movements and gestures, a generic gamepad,
and facial expressions. In addition, the IntellWheels project proposes an architecture
that makes the interface extensible, enabling new devices and recognition methods to
be added easily. It also presents a flexible paradigm that allows the user to define
the sequence of inputs assigned to each action, allowing an easy and optimized
configuration for each user. For example, the action of following the right wall can
be triggered by blinking the left eye followed by the spoken expression "go", as
illustrated in the sketch below.
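To make this binding paradigm concrete, the following Python sketch shows one
possible way to map user-defined sequences of input events to actions. It is not the
IntellWheels implementation; the device names, event strings, and Action values are
assumptions made for illustration only.

```python
# Hypothetical sketch of configurable input-sequence-to-action bindings.
# Device names, event strings, and actions are illustrative assumptions,
# not the actual IntellWheels API.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, List, Optional, Tuple


class Action(Enum):
    FOLLOW_RIGHT_WALL = auto()
    STOP = auto()
    GO_FORWARD = auto()


@dataclass
class InputBindings:
    """Maps user-defined sequences of (device, event) pairs to actions."""
    bindings: Dict[Tuple[Tuple[str, str], ...], Action] = field(default_factory=dict)
    _buffer: List[Tuple[str, str]] = field(default_factory=list)

    def bind(self, sequence: List[Tuple[str, str]], action: Action) -> None:
        """Assign a user-chosen sequence of input events to an action."""
        self.bindings[tuple(sequence)] = action

    def on_event(self, device: str, event: str) -> Optional[Action]:
        """Feed one recognized event; return an Action if a sequence completes."""
        self._buffer.append((device, event))
        # Keep only as many recent events as the longest configured sequence.
        max_len = max((len(seq) for seq in self.bindings), default=0)
        self._buffer = self._buffer[-max_len:]
        # Trigger when the tail of the buffer matches a bound sequence.
        for seq, action in self.bindings.items():
            if tuple(self._buffer[-len(seq):]) == seq:
                self._buffer.clear()
                return action
        return None


# Example from the text: blink the left eye, then say "go",
# to trigger the follow-right-wall behavior.
bindings = InputBindings()
bindings.bind([("face", "blink_left"), ("speech", "go")], Action.FOLLOW_RIGHT_WALL)

assert bindings.on_event("face", "blink_left") is None   # sequence incomplete
assert bindings.on_event("speech", "go") is Action.FOLLOW_RIGHT_WALL
```

Keeping each recognizer as a separate event source in this way is also what would
let new devices be added without changing the binding logic, in line with the
extensible architecture described above.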
Fig. 2 shows the IntellWheels Multimodal Interface with all the input devices
connected.