technology for blind users to exemplify how the UX and cognitive concepts can be used
to create innovation and open up possibilities for disabled people.
15.4.2 Sonification of the System
The way the representation of space is processed has long been discussed in the literature. Many authors have analyzed whether spatial representation is directly guided by visual experience or can instead arise from other sensory modalities that support equivalent spatial representations. Although some authors pointed out that visual experience is of the utmost importance for the processing of spatial cues (Thinus-Blanc and Gaunet 1997), there is a broad consensus that the spatial representation of information is independent of the modality through which the sensory inputs are delivered. In particular, some studies highlighted
that blind subjects show better performance in processing spatial auditory inputs than
sighted people (Zimmer 2001; Avraamides et al. 2004; Mast et al. 2007). Moreover, as Bryant (1992, 1997) pointed out, when blind people perform spatial exploration tasks guided only by natural acoustic cues, they show an ability functionally equivalent to visually guided exploration in sighted people. Building on these suggestions, an amodal system of spatial representation has been proposed, one that accounts for the involvement of auditory, haptic, and kinesthetic information in the spatial mapping of people with visual disabilities (Millar 1994). These findings seem to be in contrast with other studies stating both
that spatial understanding is directly related to visual experience and that less efficient
spatial capabilities are due to a lack of visual experience (Ungar et al. 1997).
Starting from these findings, over the last 30 years research in several fields has investigated ways to transmit spatial information through nonvisual sensory channels, paying special attention to sonification methods as an alternative to traditional visual and haptic displays. This approach should be especially useful in complex scenarios involving visual overload, multiple distractors, or signals rendered incomplete by visual noise. Because sound conveys information about the spatial location of its source (Brunetti et al. 2005, 2008), it seems able to communicate the complexity of both static and dynamic data representations while keeping their internal relations unchanged. As defined by
Kramer and colleagues, sonification is “the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” (1997, p. 3).
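To make this definition concrete, consider the following minimal parameter-mapping sketch in Python (an illustrative toy, not a system from the studies cited here; the function name, the pitch range, and the linear mapping are all assumptions made for the example). Each value in a data series is mapped to the pitch of a short sine tone, so relations among the data points, such as rising or falling trends, become perceivable relations in the acoustic signal:

# Minimal parameter-mapping sonification sketch (illustrative assumption,
# not a specific system from the literature): each data value is mapped
# linearly to a pitch, so relations in the data become audible relations.
import math
import struct
import wave

def sonify(data, out_path="sonification.wav",
           f_min=220.0, f_max=880.0, tone_s=0.25, rate=44100):
    """Map each value in `data` to a sine tone between f_min and f_max Hz."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0                      # avoid division by zero
    frames = bytearray()
    for value in data:
        # Linear mapping: data relation -> frequency relation.
        freq = f_min + (value - lo) / span * (f_max - f_min)
        for n in range(int(tone_s * rate)):
            sample = 0.5 * math.sin(2 * math.pi * freq * n / rate)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)                      # mono
        wav.setsampwidth(2)                      # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

# A rising-then-falling series becomes a rising-then-falling melody.
sonify([1, 2, 4, 8, 6, 3, 1])

Richer sonifications vary further dimensions such as loudness, timbre, or spatial position, but the underlying principle of mapping data relations onto perceptual relations is the same.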
Since the 1980s, a growing number of works, especially in computer science and related fields, have focused on implementing such mappings to improve nonvisual access to spatial information. For example, in the late 1980s some researchers designed
and tested different systems based on sounds representing spatial cues, highlighting that
human-computer interaction could be improved by means of nonverbal acoustic signals
on graphic interfaces (Sumikawa et al. 1985; Gaver 1986; Blattner et al. 1989). Moreover,
in the 1990s Barfield and colleagues (1991) and Brewster (1997, 1998) designed nonspeech interfaces based on earcons, that is, structured musical patterns that provide navigational cues within hierarchical menus. By analyzing recognition performance after blind users had interacted with the interface, the authors verified the efficacy of the nonverbal acoustic items. Blind subjects showed a high rate of accuracy in recognition tasks, suggesting that such systems could support spatial orientation tasks (Barfield et al. 1991; Brewster 1997).
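As a rough illustration of how hierarchical earcons can encode a position in a menu tree, the following Python sketch (a toy example under assumed conventions, not Brewster's actual design; the scale, menu contents, and names are invented for illustration) gives each node a motif inherited from its parent plus one distinguishing note, so that siblings share a prefix and depth is audible as motif length:

# Toy sketch of hierarchical earcons (inspired by, but not reproducing,
# the designs cited above): each menu node inherits its parent's motif
# and appends one note. Scale and menu contents are illustrative only.
C_MAJOR = [261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88]  # C4..B4, Hz

def earcon_for(path):
    """Return the pitch sequence (Hz) for a menu node given its index path.

    path: e.g. (0, 1) is the second child of the first top-level item.
    Each hierarchy level contributes one note, so siblings share a
    prefix motif and differ only in their final note.
    """
    return [C_MAJOR[i % len(C_MAJOR)] for i in path]

menu = {
    (0,): "File", (0, 0): "Open", (0, 1): "Save",
    (1,): "Edit", (1, 0): "Copy", (1, 1): "Paste",
}
for path, label in sorted(menu.items()):
    notes = ", ".join(f"{f:.0f} Hz" for f in earcon_for(path))
    print(f"{label:>5}: {notes}")

Because the parent's motif is a literal prefix of each child's, a listener can in principle recover both the depth and the branch of the current item from the earcon alone, which is the kind of structure these interfaces relied on.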
To maintain the correspondence between the visual and acoustic spatial positions of items, a number of researchers proposed loudspeaker-based systems. For example, Lakatos (1993) proposed