7. Response to a questionnaire in which the participants verbally explain their preferences and judgment strategies.
8.6.4. First results obtained
Two main categories of results were obtained at the end of this test, combining subjective data (for example, evocations, preferences, or the level of sensation felt) and objective data (for example, task completion times). The first concerns the perceived quality of the sonification; the second measures the efficiency of the proposed auditory guidance.
From the point of view of subjective judgment:
- The free verbalization phase (stage 2) gives an idea of the evocative power of the sounds. In the hybrid library (LIB1), only the Telephone sound is correctly associated with its function by a majority of participants (74%). The Images sound is understood by 43% of participants, while the Communication sound suggests functions such as Telephone and Internet, which belong to the same functional universe as Communication. The other sounds are rarely associated with the right function, and the Navigation sound is recognized by none of the participants. Most sounds suggest a variety of functions; notably, nearly half of the participants associate the Music sound with an error sound (like those found in computing environments).
In the museme library (LIB2), no sound is correctly associated with the function it represents. The Music sound, the best recognized, is identified by only 30% of participants. Most sounds are spontaneously associated with warning or error sounds.
In summary, we observe that, overall, few sounds are freely associated with their function.
- The sound/function association phase (stage 3) corresponds more closely than the previous phase to the way users would discover the system in a real situation (i.e. by reading items on the screen while listening). In this case, we observe that, after discovering the model, participants judge the sounds to be globally coherent with the functions they represent, in particular the hybrid sounds (LIB1) Communication, Vehicle and Images and the museme sounds (LIB2) Music and Videos. However, in the hybrid library, half of the participants consider that the Music sound is not at all adapted to its function; in the museme library, it is the Images and Communication sounds that half of the participants consider unsuited to their functions. Table 8.1 summarizes the general results of this stage.