between the two ears of 180°). From the observer's point of view, the location of the
sound source is now ambiguous, because the waveform at the right ear might be
either a half-cycle behind that at the left ear or a half-cycle ahead. Head movements
or movement of the sound source may resolve this ambiguity, so that there is no
abrupt upper limit in our ability to use phase differences between the two ears.
However, when the wavelength of the sound is less than the path difference between
the two ears, the ambiguities increase; the same phase difference could be produced
by a number of different source locations.
There are two different mechanisms for sound localization: one operates best at
high frequencies and the other at low frequencies. For middle frequencies neither
mechanism operates efficiently, and errors are at a maximum. Stevens and
Newman ( 1936 ) investigated localization of single bursts with smooth onsets and
offsets for observers on the roof of a building so that reflection was minimized.
The listeners had to report the direction of the source in the horizontal plane, to the
nearest 15°. Although left-right confusions were rare, low-frequency sounds in
front were often indistinguishable from their mirror location behind. If these front-
back confusions were discounted, then the error rate was low at very low and very
high frequencies and showed a maximum for mid-range frequencies (around
3,000 Hz). Intensity differences are more important at high frequencies, and phase
differences provide usable cues for frequencies below about 1,500 Hz.
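The relationship between wavelength and the interaural path difference can be checked with a short calculation. This is a minimal sketch, not from the text itself: it assumes a speed of sound of about 343 m/s in air and an around-the-head path difference of roughly 0.23 m for a typical adult; both figures are illustrative assumptions.

```python
# Assumed constants (illustrative, not taken from the text):
SPEED_OF_SOUND = 343.0   # m/s, in air at roughly 20 degrees C
PATH_DIFFERENCE = 0.23   # m, approximate around-the-head distance between the ears

def wavelength(frequency_hz: float) -> float:
    """Wavelength in metres of a tone at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

def phase_cue_ambiguous(frequency_hz: float) -> bool:
    """Phase differences stop being a reliable localization cue once the
    wavelength is shorter than the interaural path difference, because the
    same phase difference can then arise from several source locations."""
    return wavelength(frequency_hz) < PATH_DIFFERENCE

# Frequency at which the wavelength equals the path difference:
crossover = SPEED_OF_SOUND / PATH_DIFFERENCE  # about 1,490 Hz

for f in (500, 1000, 1500, 3000):
    print(f"{f} Hz: ambiguous = {phase_cue_ambiguous(f)}")
```

With these assumed values the crossover lands near 1,490 Hz, which is consistent with the roughly 1,500 Hz limit for usable phase cues quoted above.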
4.6.4 Discriminating Sounds
Our abilities in discriminating sound depend upon whether we mean absolute dis-
crimination or relative discrimination (this applies to vision too). Absolute dis-
crimination is quite poor (e.g., systems should not rely on remembering a tone or
sound), but relative discrimination is very good. With sounds we can remember no
more than five to seven items for absolute discrimination unless we can attach
meaning to them, such as pitch labels. Also, as we vary more of the dimensions of the
stimulus (increasing its complexity), so we increase our ability to discriminate (up to
150 sounds—varying in frequency, rhythm, location, duration, volume, etc.).
4.6.5 Implications for System Design
There are two ways in which sounds are used in current systems. The first is to
provide voice output. The second is to provide audible alerts, such as telephone
ring tones, and audible alarms.
Voice outputs generally require more processing than plain sounds. They can
convey much more information, however, and they are particularly important for
people with impaired vision. Blind users, for example, use voice output with
screen readers so they can process the text shown on a display screen.