causing a kind of saturation of the visual channel. It is therefore necessary to use
other modalities (auditory, haptic, etc.) to convey information to the user. It is not
only a question of unloading the visual modality by transposing visual information
to the auditory domain. It is also a matter of understanding and using the
specificities of the auditory modality in order to create complementarities with the
visual modality. Among the specificities of our auditory system are, for example, a
shorter reaction and integration time than for the visual channel, a panoramic
perception at 360°, the impossibility of “closing one's ears”, etc. It is thus
necessary to take these specificities into account in the sonification of HMIs.
This approach is all the more relevant in the context of automobile driving,
which requires visual attention to be focused on the road scene. As numerous studies
show, one of the main causes of road accidents is a lapse in the driver's attention,
which can be caused by distraction, inattention or interference due to a secondary
task (for a detailed review of the literature, please refer to [LEM 08]). The rapid
expansion of new technologies means that we find ourselves increasingly engaged in
activities unrelated to driving: making phone calls, listening to music or
navigating menus on the onboard computer. It is this last point that we address in
this chapter. Indeed, navigating the onboard computer menus requires the driver to
look at the control screen in the dashboard, which carries an obvious risk: while
the driver is looking at the screen, he is no longer looking at the road.
Our objective was therefore to develop a navigation principle involving search
“by ear”, by proposing a sonification of the actions and items of an automobile's
onboard computer menu. A review of the state of the art enabled us to highlight the
main principles of auditory design for the sonification of hierarchical menus. The
works presented can be grouped according to three approaches:
- The auditory representation of the position of an item within the hierarchy of
the menu by means of earcons, i.e. abstract auditory signals synthesized from
previously defined auditory parameters. This method is very efficient, as it is
based on a direct mapping of the hierarchical relations onto the different auditory
parameters, but it does not enable us to represent the semantic content of the item.
- The semantic representation of the item or family of items by means of speech
synthesis or auditory icons, i.e. sounds taken from our daily environment that
represent the item directly or metaphorically. These sounds are, in principle, very
intuitive and require very little learning.
- A mixed approach that combines the advantages of the first two. Few works deal
with this approach, but it is the one we have chosen for developing our
sonification model.
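As a minimal illustration of the earcon principle described above (mapping an item's position in the menu hierarchy onto auditory parameters), the sketch below encodes a menu path as a pitch, a timbre and a duration. All concrete choices here (frequencies, timbre names, durations) are illustrative assumptions, not the design actually developed in this chapter:

```python
def earcon_parameters(path):
    """Map a menu path (list of sibling indices from the root) to
    illustrative earcon parameters:

    - depth in the hierarchy  -> base pitch (deeper items sound higher)
      and duration (deeper items last longer);
    - top-level branch        -> timbre (waveform family);
    - sibling index of the item -> pitch offset in semitones.
    """
    depth = len(path)
    timbres = ("sine", "square", "sawtooth", "triangle")
    base_freq = 220.0 * 2 ** ((depth - 1) / 2)   # pitch rises with depth
    semitone_offset = path[-1] if path else 0    # sibling index shifts pitch
    freq = base_freq * 2 ** (semitone_offset / 12)
    timbre = timbres[(path[0] if path else 0) % len(timbres)]
    return {
        "frequency_hz": round(freq, 1),
        "timbre": timbre,
        "duration_s": 0.15 * depth,
    }

# The root item of branch 0 and an item two levels down in branch 1
# receive audibly distinct parameter sets:
print(earcon_parameters([0]))     # root-level item, first branch
print(earcon_parameters([1, 2]))  # second-level item, second branch
```

Because the mapping is purely structural, two items at the same depth and sibling position always sound alike, which is exactly the limitation noted above: the earcon conveys the item's place in the hierarchy but nothing of its semantic content.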