• Impact: if the problem occurs, is it easy or hard for the user to
recover from and overcome it?
• Persistence: is it a one-time problem that users can overcome once
they know about it, or will it keep causing trouble?
Specific user characteristics can also be taken into account:
differences between novice and experienced users, differences in age,
gender, etc. Severity thus has several components; however, to ease final
prioritizing and recommendations, they are merged into a single figure on a
scale of 0-5. Regarding the use and value of severity ratings, the Nielsen
Norman Group states:
Severity ratings can be used to allocate the most resources to fix the most
serious problems and can also provide a rough estimate of the need for
additional usability efforts. If the severity ratings indicate that several
disastrous usability problems remain in an interface, it will probably be
unadvisable to release it. But one might decide to go ahead with the release
of a system with several usability problems if they are all judged as being
cosmetic in nature. [25]
In heuristic evaluation each evaluator works independently of the others,
which helps ensure unbiased results. At the end of the evaluation the
inspectors are allowed to confer with each other, and the results are then
compiled into a joint report. Here duplicates are removed and each problem
is given one final severity rating negotiated by all evaluators. Severity
ratings from a single evaluator are not considered reliable because of the
subjective nature of the method; when ratings are based on the mean of the
ratings from at least three evaluators, they are considered reliable enough
to be trustworthy [25].
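A minimal sketch of this aggregation step is given below. It is an illustration only, not part of the method as described: the problem descriptions, severities, and the compile_joint_report helper are invented, and the mean rating merely stands in for the negotiated final rating, with the three-evaluator threshold from [25] used as a reliability flag.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical findings: each evaluator reports (problem description, severity 0-5).
evaluations = {
    "evaluator_1": [("Labels use inconsistent terminology", 3),
                    ("No feedback after saving", 4)],
    "evaluator_2": [("No feedback after saving", 3),
                    ("Error messages lack recovery advice", 2)],
    "evaluator_3": [("No feedback after saving", 4)],
}

# Ratings averaged over fewer evaluators are treated as tentative [25].
MIN_EVALUATORS = 3

def compile_joint_report(evaluations):
    """Collapse duplicate problems and attach a mean severity rating.

    In the actual method the final rating is negotiated by the evaluators;
    the mean here only stands in for that step.
    """
    ratings = defaultdict(list)
    for findings in evaluations.values():
        for problem, severity in findings:
            ratings[problem].append(severity)

    report = []
    for problem, scores in ratings.items():
        report.append({
            "problem": problem,
            "severity": round(mean(scores), 1),
            "raters": len(scores),
            "reliable": len(scores) >= MIN_EVALUATORS,
        })
    # Most severe problems first, so resources can be allocated to them.
    return sorted(report, key=lambda r: r["severity"], reverse=True)

for entry in compile_joint_report(evaluations):
    print(entry)
```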
During the evaluation, the findings are recorded in a report written by the
evaluator. An alternative approach is to let the evaluator verbalize the
findings to an observer (the evaluation manager), who takes notes instead.
This may reduce the evaluator's workload during the inspection, leaving
more attention for the assessment of the interface itself; if appropriate,
the observer can also be responsible for aggregating all findings (which is
further simplified when the observer takes part in all evaluation sessions).
Heuristic evaluation is a discount evaluation method; it is cost-
effective, quick, intuitive, easy to learn and simple to administer. In a
survey amongst usability practitioners it was rated as one of the top
methods [26]. Conducting a heuristic evaluation does not require
predefined measures of performance or a flawless system. What is
required is something that explains the system to be evaluated, and that