touchscreens, buttons) and associated software
approaches (e.g. use of menus, lists, scrolling).
Historically, such paradigms were conceived as a
means of overcoming the significant limitations
of command-line user-interfaces and provided
a what-you-see-is-what-you-get (WYSIWYG)
experience for the user (Shneiderman, 1998). In
the driving context, several studies have shown
that such highly visual-manual user-interfaces
can have a considerable impact on safety (Nowa-
kowski, Utsui and Green, 2000; Tijerina, Palmer
and Goodman, 1998).
As an alternative to such user-interfaces, speech
shows promise as a largely non-visual/manual
input method for navigation systems (Tsimhoni,
Smith, and Green, 2002). Nevertheless, research
has also shown that there is considerable potential
for cognitive distraction with speech interfaces
(Gärtner, König, and Wittig, 2001), and it is
critical that recognition accuracy is very high.
Moreover, designers must provide clear dialogue
structures, familiar vocabulary, strong feedback
and error recovery strategies. These issues are of
particular importance given the potentially large
number of terms (e.g. towns, street names) that
might be uttered and the difficulties that a speech
recognition system can experience with alphabet
spelling (specifically, the 'e-set'—b, c, d, e, g etc.).
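To make the feedback and error-recovery point concrete, the following is a minimal sketch of a confirm-or-retry loop for spoken destination entry. It is not drawn from any of the cited systems: the stubbed recogniser, the canned confidence scores and the 0.75 threshold are all illustrative assumptions, standing in for a real in-vehicle speech recognition engine.

```python
# Illustrative sketch only: a confirm-or-retry dialogue for spoken
# destination entry. The recogniser is stubbed with canned results;
# the 0.75 confidence threshold is an assumed, not a recommended, value.

CONFIDENCE_THRESHOLD = 0.75  # assumption: below this, ask the driver to repeat
MAX_ATTEMPTS = 3             # assumption: fall back to manual entry afterwards

def recognise(utterance_clip):
    """Stand-in for a real speech recogniser: returns (text, confidence)."""
    canned = {
        "clip1": ("Nottingham", 0.91),
        "clip2": ("Birmingham", 0.52),   # low confidence triggers error recovery
    }
    return canned.get(utterance_clip, ("", 0.0))

def destination_dialogue(clips):
    """Run a simple clarify/confirm loop over successive driver utterances."""
    for attempt, clip in enumerate(clips[:MAX_ATTEMPTS], start=1):
        text, confidence = recognise(clip)
        if confidence >= CONFIDENCE_THRESHOLD:
            # Strong feedback: echo the recognised destination back to the driver.
            print(f"Did you say '{text}'? (attempt {attempt})")
            return text
        # Error recovery: acknowledge uncertainty and prompt for a repeat.
        print(f"Sorry, I am not sure I heard that correctly (attempt {attempt}). "
              "Please say the destination again.")
    print("Switching to manual entry.")  # graceful fallback after repeated failures
    return None

if __name__ == "__main__":
    destination_dialogue(["clip2", "clip1"])
```

The property this sketch illustrates is that the driver always receives explicit feedback about what was recognised, and low-confidence results lead to a repair prompt rather than a silent guess.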
Recent research has also shown the potential
for handwriting recognition in a driving context
for menu negotiation and inputting alphanumeric
data (Kamp et al., 2001; Burnett et al., 2005).
Whilst handwriting requires manual input, it involves a reduced cognitive component and is a more familiar method for users than speech interfaces. Nevertheless, issues relating to
recognition accuracy remain and it is critical to
place a handwriting touchpad in a location that
facilitates the use of a driver's preferred hand
(Burnett et al., 2005).
The difficulties of complex interactions with vehicle navigation systems are considered to be so significant that many authors believe systems should disable "overly demanding" functionality when the vehicle is in motion, for example by "greying out" menu options (Green, 2003; Burnett, Summerskill and Porter, 2004). This is currently a rich area for research, requiring an understanding of a) what is meant by "overly demanding", b) how to establish valid and reliable metrics for assessing demand and, finally, c) where to place the limits of acceptability (Burnett, Summerskill and Porter, 2004).
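As an illustration of the kind of lock-out behaviour these authors describe, the sketch below greys out functions above an assumed speed threshold. The 5 mph threshold, the demand ratings and the function names are hypothetical; the cited papers do not prescribe specific values.

```python
# Illustrative sketch of speed-gated lock-out ("greying out") of demanding
# functions. The threshold, demand ratings and function names are assumptions
# for the example, not values taken from the cited studies.

LOCKOUT_SPEED_MPH = 5  # assumed threshold for treating the vehicle as "in motion"

# Hypothetical catalogue of functions with a coarse demand rating.
FUNCTIONS = {
    "enter_destination_by_spelling": "high",   # overly demanding on the move
    "scroll_points_of_interest": "high",
    "repeat_last_instruction": "low",
    "mute_guidance": "low",
}

def available_functions(speed_mph):
    """Return the functions that remain enabled at the given speed.

    High-demand items are 'greyed out' (disabled) once the vehicle
    exceeds the lock-out threshold; low-demand items stay available.
    """
    if speed_mph <= LOCKOUT_SPEED_MPH:
        return set(FUNCTIONS)
    return {name for name, demand in FUNCTIONS.items() if demand == "low"}

if __name__ == "__main__":
    print(sorted(available_functions(0)))    # stationary: everything enabled
    print(sorted(available_functions(30)))   # moving: only low-demand items remain
```

The open research questions above map directly onto this sketch: deciding which functions count as "high" demand, how that rating should be measured, and where the speed threshold should sit.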
Underload for Vehicle
Navigation Systems
In contrast with the overload perspective, over
the last five years some researchers have viewed
navigation systems as a form of automation, where
underload issues become central. Vehicle naviga-
tion systems calculate a route for a driver according
to pre-defined algorithms. Consequently, drivers do not plan a journey to a destination; rather, they confirm a computer-generated route. Systems then
present filtered information during the journey,
often via paced visual and auditory instructions.
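To ground the phrase "pre-defined algorithms", the following is a minimal sketch of the kind of shortest-path computation a navigation system performs over its digital map. The toy road network, the edge costs and the use of Dijkstra's algorithm are illustrative assumptions rather than a description of any particular commercial system.

```python
import heapq

# Illustrative sketch: computing a route over a toy road network with
# Dijkstra's algorithm. The graph and edge costs (e.g. travel time in
# minutes) are invented for the example.

ROAD_NETWORK = {
    "Home": {"Ring Road": 4, "High Street": 6},
    "Ring Road": {"Motorway": 3, "High Street": 5},
    "High Street": {"City Centre": 7},
    "Motorway": {"City Centre": 6},
    "City Centre": {},
}

def compute_route(graph, origin, destination):
    """Return (total_cost, [waypoints]) for the cheapest path, or None."""
    queue = [(0, origin, [origin])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return None

if __name__ == "__main__":
    # The driver confirms this computer-generated route rather than planning it.
    print(compute_route(ROAD_NETWORK, "Home", "City Centre"))
```

The point made in the text is not about the algorithm itself but about the division of labour it creates: the system does the planning, and the driver's role reduces to confirming the route and following the paced instructions.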
Two related concerns are emerging as important research questions, of particular relevance to user-interfaces that rely on turn-by-turn guidance.
Firstly, it has been noted that there may be poor calibration between the perceived and the objective reliability of in-car computing systems (Lee and See, 2004). This is relevant because a navigation system
system (particularly the underlying digital map)
is unlikely ever to be 100% reliable. Neverthe-
less, drivers, largely based on their accumulated
experience, may believe this to be the case. In
certain situations, such overtrust in a system
(commonly referred to as complacency) may
lead to drivers following inappropriate routes
and potentially making dangerous decisions, for
instance, turning the wrong way down a one-way
street. There is plenty of anecdotal evidence for
such behaviour in the popular press (e.g. http://