A key aspect of the heuristic evaluation method is the involvement of several evaluators. Evaluators tend to find different usability problems, and when their findings are aggregated (and duplicates reconciled), they can account for the majority of the usability flaws. Nielsen typically recommends 3 to 5 evaluators, because there is a point of diminishing returns as the number of evaluators grows (Nielsen, 1994).
Evaluators may require a training or familiarization session, depending on the target user group, the domain, and other aspects of the work system (a definite minus of this method, as there is no documented means by which to train the experts). For example, to evaluate an information kiosk, a walk-up-and-use system intended for the general public, an evaluator would not typically receive training or prompting to explain the use of the system. In contrast, in the evaluation of a nurses' scheduling tool, a system for which nurses receive specialized training, evaluators should be given training on that system as well as preparation in the subject-matter expertise possessed by the nurses. In addition,
it may be useful to provide the evaluators with typical usage scenarios for the design to help them anticipate the realistic demands on the design's functionality. To formally account for these factors, Muller et al. (1995) introduced three additional heuristics to encourage evaluators to consider the context of use (arguably ignored in earlier formulations of heuristic evaluation). These heuristics are:
1. Respect the user and his or her skills
2. Promote a pleasurable experience with the system
3. Support quality of work
First, each evaluator steps through the interface several times independently, recording any potential usability problems they identify with the interface for the target users and tasks. Each usability problem identified should be accompanied by a sufficiently explanatory description. The more specific evaluators are in describing the issues observed, the better, as this helps isolate target problems and establish their priority in the subsequent redesign activities.
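As a concrete illustration of what a single evaluator might record, consider the minimal sketch below. The record structure, its field names, and the 0-to-4 severity scale are illustrative assumptions for this sketch, not part of the technique itself:

from dataclasses import dataclass

@dataclass
class Finding:
    """One usability problem recorded by a single evaluator (illustrative structure)."""
    evaluator: str     # who reported the problem
    location: str      # screen or interface element where the problem was observed
    heuristic: str     # heuristic judged to be violated
    description: str   # specific, explanatory account of the problem
    severity: int      # provisional rating, e.g., 0 (not a problem) to 4 (catastrophic)

# One entry from an evaluator's independent pass through the interface.
example = Finding(
    evaluator="Evaluator A",
    location="Shift-assignment screen",
    heuristic="Match between system and the real world",
    description="Shift codes are abbreviations that ward nurses do not use.",
    severity=3,
)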
Following the independent evaluations, the evaluators' problems and descriptions are aggregated and duplicates are accounted for. The evaluators then discuss their findings, eliminating any duplicate problems and resolving any contradictory issues. In this discussion, the evaluators often work to form a consensus on the severity level for each issue. The outcome of the heuristic evaluation is a list of specific problems, each with a reference to the heuristics violated and a severity level, which provides guidance as to which issues take priority in redesign. While the outcomes of heuristic evaluations are not recipes that explicitly direct redesign activities toward a "correct" design, solutions that emerge from the evaluation are often intuitive because of the heuristics' connection to fundamental usability principles.
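The aggregation and prioritization step can be sketched as follows. This is a standalone illustration (findings are plain tuples rather than the record sketched earlier); grouping duplicates by location and heuristic, and using the median rating as a starting point for the severity discussion, are assumptions made for the sketch, since in practice de-duplication and consensus are reached through evaluator discussion:

from collections import defaultdict
from statistics import median

# Each finding: (evaluator, location, heuristic, description, severity 0-4).
findings = [
    ("A", "Shift-assignment screen", "Match with the real world",
     "Shift codes are unfamiliar abbreviations.", 3),
    ("B", "Shift-assignment screen", "Match with the real world",
     "Codes for shifts are not the terms nurses use.", 2),
    ("B", "Login screen", "Error prevention",
     "No confirmation before discarding an unsaved schedule.", 4),
]

# Group likely duplicates by where the problem occurred and which heuristic it violates.
grouped = defaultdict(list)
for evaluator, location, heuristic, description, severity in findings:
    grouped[(location, heuristic)].append((evaluator, description, severity))

# Summarize each group: keep all descriptions and take the median severity
# as a starting point for the consensus discussion.
report = []
for (location, heuristic), items in grouped.items():
    report.append({
        "location": location,
        "heuristic": heuristic,
        "descriptions": [d for _, d, _ in items],
        "evaluators": sorted({e for e, _, _ in items}),
        "severity": median(s for _, _, s in items),
    })

# Highest-severity issues first, to guide redesign priorities.
report.sort(key=lambda issue: issue["severity"], reverse=True)
for issue in report:
    print(issue["severity"], issue["heuristic"], "-", issue["location"])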
A shortcoming of heuristic evaluations is that evaluators are often not fully prepared for the inspection, either in applying the heuristics or with respect to the target domain. Direction is typically not provided on the specific approach to take when assessing a design against each heuristic (and even experienced evaluators may be inconsistent in how they do this). An additional limitation, some argue, is the propensity for heuristic evaluation to generate a number of false-positive usability issues. This is especially true when the evaluators have inconsistent and unreliable knowledge of the domain and context (Cockton et al., 2003). In their assessment of heuristic evaluation, Cockton et al. (2003) noted a tendency for evaluators to underestimate users' capabilities in display interactions. Still, the convenience and efficiency afforded by the heuristic evaluation technique prompt its use in situations requiring a quick turnaround in the iterative design process and in situations where it is impractical or impossible to include end-users directly in the evaluation and iteration stages of UCD.
The following key points summarize the heuristic evaluation technique:
. Prototype requirements: Compatible with a range of fidelities, from paper prototypes through operational systems
. Number of evaluators: 3 to 5, but possibly more in complex design situations
. Testing environment: Flexible
. Time involved: 2 to 3 h, but longer if the system is highly complex