areas—generally used to encode visual spatial information—in response to auditory
stimuli (Merabet et al. 2008). More recently, Zhao of the Human Computer Interaction
Laboratory at the University of Maryland proposed a new sonification technique for
transmitting georeferenced data by means of haptic and auditory signals (Zhao et al. 2004,
2005) and implemented a new system called iSonic, a sonification tool that allows people
with visual disabilities to explore georeferenced maps through combined haptic and
auditory information, following different exploration techniques. The usability of iSonic
was tested with users who had been totally blind for a long time (Zhao et al. 2008), and with
blindfolded sighted users and congenitally and acquired-blind users (Olivetti Belardinelli
et al. 2007). On the basis of these usability studies, the authors suggested that, during
spatial orientation, totally or partially blind subjects prefer a body-centered strategy based
on bodily reference points over an allocentric strategy, which is often adopted in mental
rotation and scanning tasks (Olivetti Belardinelli et al. 2009; Delogu et al. 2010).
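
The map-sonification techniques surveyed above all rest on some parameter mapping between spatial data and sound. As a purely illustrative sketch, and not the actual iSonic implementation, the fragment below shows one common mapping: a hypothetical data value for each map region drives pitch, while the region's longitude drives stereo panning. The helper names and the region data are invented for the example.

# Illustrative parameter-mapping sonification sketch (not the iSonic code):
# each georeferenced region is rendered as a tone whose pitch encodes the
# data value and whose stereo pan encodes the region's longitude.

def value_to_pitch(value, vmin, vmax, low_hz=220.0, high_hz=880.0):
    """Map a data value linearly onto a frequency range (Hz)."""
    t = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.5
    return low_hz + t * (high_hz - low_hz)

def longitude_to_pan(lon, lon_min, lon_max):
    """Map longitude onto a stereo pan position in [-1.0 (left), +1.0 (right)]."""
    t = (lon - lon_min) / (lon_max - lon_min) if lon_max > lon_min else 0.5
    return 2.0 * t - 1.0

# Hypothetical georeferenced data: (region name, longitude, data value).
regions = [("West County", -3.2, 12.0), ("Mid County", 0.1, 45.0), ("East County", 2.9, 30.0)]
values = [v for _, _, v in regions]
lons = [lon for _, lon, _ in regions]

for name, lon, value in regions:
    pitch = value_to_pitch(value, min(values), max(values))
    pan = longitude_to_pan(lon, min(lons), max(lons))
    # A real system would synthesize audio here; we only print the mapping.
    print(f"{name}: {pitch:.0f} Hz, pan {pan:+.2f}")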
15.4.2.1 Application of a UX Framework for Designing
a Sonified Visual Web Search Engine
In 2009, the Department of Computer Engineering (DIEI) of the University of Perugia and
the Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial
Systems (ECONA) implemented a sonified version of WhatsOnWeb (Di Giacomo 2007),
an accessible visual web search clustering engine that presents the indexed information
related to the requested query on a single page, using graph-drawing methods on
semantically clustered data. In this way, WhatsOnWeb overcomes the efficiency
limitations of the top-down representation (the Search Engine Results Pages, or SERPs)
adopted by commonly used search engines (Federici et al. 2008, 2010b). The sonified
version of WhatsOnWeb was tested on blind and sighted users using the Partial
Concurrent Thinking Aloud technique (Federici et al. 2010a, 2010c), an evaluation protocol
that overcomes the limitations encountered when evaluating blind users with the
concurrent and the retrospective verbal protocols. In this usability study, blind subjects
outperformed sighted users in spatial exploration guided only by auditory cues (Mele
et al. 2009; Rugo et al. 2009). By allowing users with disabilities to map interface elements
more easily, the application of sonification to the web interface significantly improves
access to and use of the system (Mele et al. 2010).
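
The contrast between a clustered, graph-based single page and a ranked results list can be made concrete with a short sketch. The fragment below is only an illustration under simplifying assumptions, not the WhatsOnWeb code: the result titles, index terms, and the naive shared-term clustering are all invented for the example, whereas WhatsOnWeb clusters results semantically and lays them out with graph-drawing algorithms.

from collections import defaultdict

# Hypothetical result set: (title, set of index terms). A WhatsOnWeb-style
# engine clusters results semantically; here we cluster naively by shared term.
results = [
    ("Guide dogs and mobility", {"mobility", "dogs"}),
    ("White cane techniques", {"mobility", "cane"}),
    ("Screen reader basics", {"software", "reader"}),
    ("Braille display review", {"software", "braille"}),
]

# Count how often each term occurs across results.
term_counts = defaultdict(int)
for _, terms in results:
    for t in terms:
        term_counts[t] += 1

# Group each result under its most frequent (dominant) term to form clusters.
clusters = defaultdict(list)
for title, terms in results:
    label = max(terms, key=lambda t: term_counts[t])
    clusters[label].append(title)

# Build a simple graph: the query node links to cluster nodes, which link to results.
graph = {"query": list(clusters)}
for label, titles in clusters.items():
    graph[label] = titles

# A sonified front end would traverse this graph and render each node with
# speech plus auditory cues; here we only print the single-page structure.
for node, children in graph.items():
    print(node, "->", ", ".join(children))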
From an overall review of the above-mentioned studies, sonification appears to be an
effective way to transmit spatial information (e.g., graphic or environmental data). As
highlighted by many studies, people with visual disabilities show spatial capabilities
equivalent to those of sighted users in both spatial orientation and spatial recall tasks.
On the basis of this evidence, many authors support the “amodal hypothesis,” which
posits that an amodal system is involved in the spatial mapping of auditory, haptic, and
kinesthetic information in blind people. However, almost none of the systems mentioned
in the previous section were developed with a user-centered design approach that takes
end users' needs into account. Most of the proposed sonification tools require long
training sessions to be used efficiently and may lead to cognitive overload.
Today, both user-centered design and the integrated model of interaction evaluation play
a central role in developing ATs as mediators that enable users to overcome (virtual or
physical) environmental barriers. In this section, we discussed the