may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

8. Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

9. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution (see the sketch after this list).

10. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
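As a brief illustration of heuristic 9, the hypothetical sketch below contrasts an error report that violates the guideline with one that states the problem in plain language and suggests a recovery step. The file name, size limit, and wording are invented for illustration only.

```python
# Hypothetical sketch of heuristic 9: the file name, size limit, and wording
# are invented for illustration, not taken from any particular system.

MAX_UPLOAD_MB = 10  # assumed limit for this example

def report_upload_error_bad(code: int) -> str:
    # Violates the heuristic: a bare code, no diagnosis, no suggested recovery.
    return f"Error 0x{code:04X}"

def report_upload_error_good(filename: str, size_mb: float) -> str:
    # Follows the heuristic: plain language, precise problem, constructive suggestion.
    return (
        f"'{filename}' is {size_mb:.1f} MB, which is larger than the "
        f"{MAX_UPLOAD_MB} MB upload limit. Try compressing the file or "
        "splitting it into smaller parts before uploading again."
    )

print(report_upload_error_bad(0x2F))
print(report_upload_error_good("survey-data.csv", 14.2))
```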
Strengths of Heuristic Evaluation

Here are some strengths of heuristic evaluation:

• Its ease of implementation and high efficiency (Law et al., 2002; Nielsen, 1994b).

• It is considered to have a good success rate, in that typically only 3-5 usability experts are needed to detect most (75-80%) of the usability flaws a system presents (Nielsen, 1994b); an illustrative calculation follows this list.

• Its early applicability in the development lifecycle and its low cost: it requires neither a working prototype nor the real users (Nielsen, 1994b).

• It is becoming part of the standard HCI curriculum and is therefore known to many HCI practitioners (Greenberg et al., 1999). The heuristics are well documented and therefore easy to learn and put to use, so it may be argued that heuristic evaluation can also be effectively conducted by non-usability experts (Nielsen, 1994b).
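The 75-80% figure cited above is commonly related to the problem-discovery model of Nielsen and Landauer, in which the expected share of problems found by n independent evaluators is 1 - (1 - p)^n, where p is the probability that a single evaluator finds any given problem. The sketch below is illustrative only: the value p = 0.35 is an assumption, not a figure from this chapter, and actual yields depend heavily on evaluator skill, as the limitations below note.

```python
# Illustrative only: problem-discovery model found(n) = 1 - (1 - p)**n.
# The per-evaluator detection rate p = 0.35 is an assumed value for this
# sketch, not a figure taken from this chapter.

def share_found(n_evaluators: int, p: float = 0.35) -> float:
    """Expected share of usability problems found by n independent evaluators."""
    return 1.0 - (1.0 - p) ** n_evaluators

for n in (1, 3, 5, 10):
    print(f"{n:2d} evaluator(s): {share_found(n):.0%} of problems found")
# With p = 0.35 this gives roughly 35%, 73%, 88%, and 99%, i.e. a handful of
# evaluators already uncovers the bulk of the problems.
```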
On the whole, heuristic evaluation is considered to be a cost-effective evaluation method. Its main strengths lie in providing discovery and analysis resources (Cockton et al., 2003), such as domain and system knowledge, where it generally outperforms other popular inspection techniques such as guideline-based methods or the cognitive walkthrough (Wharton et al., 1994).

Limitations of Heuristic Evaluation

Here are some specific limitations of heuristic evaluation:

• Heuristic evaluation is highly dependent on the skills and experience of the specific usability expert(s) involved. At a high level of generality, the heuristics are "motherhood statements that serve only to guide the inspection rather than prescribe it" (Greenberg et al., 1999).

• Participants are not the real users. Regardless of the experts' skills and experience, they are still "surrogate users" (i.e., experts who emulate real users) (Kantner et al., 1997); the resulting data are therefore not truly representative of the real users.

• Heuristic evaluation does not fully capture or take into account the context of use of the system under evaluation but rather evaluates it "as a relatively self-contained object" (Muller et al., 1995).

• It has been said that the majority of usability flaws detected by heuristic evaluation are 'minor' usability problems (for instance, by Nielsen (1994a)) or false positives, that is, problems that do not negatively impact user performance or users' perception of system quality (Simeral and Russell, 1997).