As the amount of information available through the Web constantly increases, screen readers show clear limits. Even in cases where the output is typically textual, such as the results of a web search engine, the interaction between a blind person and a computer can become slow and convoluted because of the many lines of text that a screen reader may need to read before leading the user to the desired information. Here too, it would be very useful to adopt techniques such as those described in the previous section, which convey a great amount of information in an intuitive and concise manner. For example, a visual map of semantic categories could significantly reduce the time needed to explore the result space and find the desired information.
The adoption of solutions based on information visualization may seem impossible for blind people because of their disability. However, many studies agree that the spatial representation of information is independent of the way in which the sensory inputs are presented; in particular, some authors have pointed out that blind subjects perform better than sighted people at processing spatial auditory inputs (de Vega et al. 2001; Avraamides et al. 2004). Indeed, it has been shown that blind people can perform spatial exploration tasks guided only by natural acoustic cues with an ability that is functionally equivalent to the visually guided performance of sighted people (Bryant 1992). Moreover, the nature of sound seems able to communicate the complexity of visual representations of data (Kramer 1994). Therefore, information visualization approaches similar to those described in the previous section can be coupled with sonification techniques. Sonification is "the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation" (Kramer et al. 1997).
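To make this definition concrete, the following minimal Python sketch (illustrative only; the data series, pitch range, and output file name are assumptions, not part of any system discussed here) maps a small data series onto pitch, so that relations between values become relations between tones, and writes the result to a WAV file.

```python
import math
import struct
import wave

# Illustrative data series to sonify (values are assumptions).
data = [3.0, 7.5, 1.2, 9.8, 5.0]

RATE = 44100                 # samples per second
DURATION = 0.3               # seconds per data point
F_MIN, F_MAX = 220.0, 880.0  # map the data range onto two octaves (A3-A5)

lo, hi = min(data), max(data)
samples = []
for value in data:
    # The core of sonification: a data relation (relative magnitude)
    # becomes a perceived relation (relative pitch).
    freq = F_MIN + (value - lo) / (hi - lo) * (F_MAX - F_MIN)
    samples += [int(16000 * math.sin(2 * math.pi * freq * t / RATE))
                for t in range(int(RATE * DURATION))]

# Write a mono 16-bit WAV file that plays the data as a tone sequence.
with wave.open("sonified.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(RATE)
    wav.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

A linear value-to-pitch mapping is only the simplest choice; intensity, timbre, or rhythm could encode the same relations.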
16.2.3.2 A Sonification Example
As a case study, we report recent experimental research on the sonification of WhatsOnWeb (Rugo et al. 2009; Mele et al. 2010). In most sonified systems (see Chapter 15), priority is usually given to mapping sounds to data rather than to interactivity with the user. To overcome this limit and to guarantee that the sonification reflects both the interaction design and the information, Zhao and colleagues (2008) proposed the Action by Design Component (ADC) framework, a sonification model designed to permit active and dynamic navigation of the interaction environment. For this reason, the ADC framework was chosen as the theoretical background for the sonification of WhatsOnWeb, in which the indexed data are organized by semantic correlations, resulting in abstract information.
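To clarify the action-to-sound coupling at the heart of the ADC approach, the sketch below gives a minimal Python illustration of navigation actions that each trigger immediate auditory feedback. All class, method, and sound names are hypothetical; this is a schematic reading of the idea, not Zhao and colleagues' actual implementation.

```python
# Schematic sketch of action-triggered sonification: every navigation
# action produces immediate auditory feedback. All names are hypothetical.
class ConsolePlayer:
    """Stand-in for a real audio backend: just logs each sound event."""
    def play(self, sound_id):
        print("sound:", sound_id)

class SonifiedGraphView:
    def __init__(self, graph, player):
        self.graph = graph    # mapping: node -> list of neighbouring nodes
        self.player = player  # any object exposing play(sound_id)
        self.current = None

    def focus(self, node):
        """Action: move the focus cursor onto a node; a sound identifies it."""
        self.current = node
        self.player.play("node_tone:" + node)

    def expand(self):
        """Action: open the focused node and announce its neighbours."""
        for neighbour in self.graph.get(self.current, []):
            self.player.play("neighbour_tone:" + neighbour)

# Usage: focusing and expanding a category each yield immediate feedback.
view = SonifiedGraphView({"science": ["physics", "biology"]}, ConsolePlayer())
view.focus("science")
view.expand()
```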
The sonification of WhatsOnWeb is combined with visual events describing both global and local browsing information. When the global information is visualized after the "search" action, a temporization technique increases the intensity of each category from the first to the last ranked result; this gives the user an overall overview of the information and allows a first mental representation of the structure to be browsed. The complexity of the tone of each node is related to the complexity of its paraverbal information: for example, while a category is browsed, a harmonic chord is played, suggesting its semantic links with the other nodes. Short, low-latency (less than 100 ms) sounds are used to grant a kind of active interaction in which processing and retaining the sound information does not overload short-term memory (Atkinson and Shiffrin 1971).
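As a rough illustration of the two mappings just described, the following Python sketch computes sound parameters for a few hypothetical ranked categories: rank drives intensity (the temporization) and the number of semantic links drives chord complexity. All data values, intervals, and thresholds are assumptions for illustration, not the actual parameters of WhatsOnWeb.

```python
# Illustrative ranked categories with hypothetical semantic-link counts.
categories = [("news", 5), ("sports", 2), ("science", 8)]

BASE_FREQ = 261.63                       # C4; the pitch base is an assumption
HARMONIC_RATIOS = [1.0, 1.25, 1.5, 2.0]  # major-chord-like intervals

for rank, (name, n_links) in enumerate(categories, start=1):
    # Temporization: intensity grows from the first to the last ranked result.
    intensity = rank / len(categories)
    # Chord complexity: more semantic links -> more tones in the chord.
    n_tones = min(1 + n_links // 3, len(HARMONIC_RATIOS))
    chord = [round(BASE_FREQ * r, 1) for r in HARMONIC_RATIOS[:n_tones]]
    print(name, "intensity=%.2f" % intensity, "chord=", chord, "Hz")
```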
Moreover, WhatsOnWeb browsing is supported by reiterable auditory feedback, which provides spatial information to facilitate user orientation. Indeed, WhatsOnWeb provides the user with a persistent