TABLE 7.9 Methods Used in User Observations to Assess Usability

Empirical evaluation in controlled user testing (Dix et al., 1998)
  Summary: No contact between evaluator and user except the provision of instructions; collects user verbalizations, interaction events, and post-test evaluations of the system to support a particular hypothesis.
  Advantage: Allows the most natural interaction setting for the user, without interruption; time measurements are valid.
  Challenges: A session can end prematurely if the user cannot accomplish the task.

Assisted evaluation (Maguire, 2001)
  Summary: Contact is made only in situations where the user is baffled by the system and the possible actions are not clear.
  Advantage: Hiccups in the prototype will not end the evaluation; prototype fidelity can be relatively incomplete.
  Challenges: Having someone seated nearby can induce unintended workload on the user; it is easy to make encouraging or discouraging statements to the user by mistake.

Think-aloud protocol (Rubin, 1994)
  Summary: Participants are encouraged to provide their own verbal commentary about their interactions, including feelings, questions, and motivations; evaluators do not ask questions but do remind users to think aloud if they forget.
  Advantage: Simultaneous collection of preference and performance data; users may be more focused and directed in their role; can identify what triggers confusion before it escalates into a bigger problem.
  Challenges: Unnatural, and some users may not be highly articulate; interferes with their performance (in negative or positive ways); distraction or fatigue is likely after a few hours.

Remote testing (Dumas, 2003)
  Summary: Evaluator and user are geographically separated from each other.
  Advantage: Testing occurs in an environment familiar to the user, with the actual equipment and contextual cues; reduced costs from not traveling.
  Challenges: Unreliable conferencing software; company firewalls can prevent live tests.
methods is subject to change based on the given constraints of a study. In a perfect world, evaluators would apply a combination of these techniques to account for as many design problems as possible. What has become more feasible, however, is the hybridization of different techniques to improve on the benefits generated by each [e.g., "heuristic walkthroughs"; see Cockton et al. (2003) and Cockton and Woolrych (2001)]. The challenge for evaluators, designers, and developers is how to best match the evaluation method with the identified goals of the design within the organizational constraints of funding, lead time, and availability of evaluators.
7.4 Conclusions
UCD is not just a methodology for design; it is a philosophy under which the entire organization and design/development process must operate in order to realize the goals of the new system. There are organization-wide implications in terms of the commitment to, support for, and acceptance of the UCD process. Rubin (1994) has identified common characteristics of organizations that successfully carry out UCD. These include:

• Use a phase-based approach for development that integrates incremental evaluations from user and expert feedback at critical stages in the design process
• Utilize multidisciplinary teams to provide the variety of skills, knowledge, and information about the target users and their activities, including engineering, marketing, training, interface design, human factors, and multimedia