domain/visualization). This increases the likelihood of finding as many
problems as possible.
When deciding how many evaluators to engage, Nielsen recommends
five [4][24] (as discussed above). This recommendation is widely
embraced, and it is sometimes the reason why heuristic evaluation is
chosen over methods that require a larger sample of participants (e.g., a
controlled user study). However, Nielsen makes clear that this number
assumes all five evaluators are skilled usability experts, i.e., well-trained
individuals from a homogeneous group. Studies from human-computer
interaction testing of web-based applications (e.g., employee time sheet
applications) offer reasons why increasing the number of evaluators
beyond five [46] may be beneficial. They describe why five are not
sufficient for finding the majority of problems [47][48], noting that the
appropriate number of evaluators may vary with what is evaluated and
with the types of problems found [48]. No empirical evidence currently
demonstrates that the five-evaluator recommendation, or any of these
newer findings, transfers to information visualization: no studies of the
type described above [46][47][48] have been carried out in that field. It
is therefore up to each researcher to make an educated guess about how
many participants to engage. One recommendation is to look at sample
sizes in previous research. Another is to monitor the evaluation process
and stop it when adding more participants yields no new information.
The chances of having engaged the right evaluators, those able to find
most problems, are expected to increase when the group covers all
relevant areas of expertise: usability, the visualization technique, the data
domain, graphical design, and so on.
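
To make the basis of the five-evaluator rule concrete, the following minimal Python sketch computes the expected proportion of problems found under Nielsen and Landauer's cumulative discovery model, 1 - (1 - lambda)^n, where lambda is the fraction of all problems a single evaluator finds. The value lambda = 0.31 is the average Nielsen reports for skilled evaluators; the function name and the printed table are illustrative, and real per-evaluator rates vary by project, which is precisely why the rule may not transfer to information visualization.

```python
# Nielsen and Landauer's cumulative discovery model: each evaluator is
# assumed to independently find a fixed fraction lambda_ of all problems.
# lambda_ = 0.31 is the average Nielsen reports; individual projects vary.

def proportion_found(n_evaluators: int, lambda_: float = 0.31) -> float:
    """Expected proportion of all usability problems found by
    n_evaluators under the cumulative binomial model."""
    return 1.0 - (1.0 - lambda_) ** n_evaluators

if __name__ == "__main__":
    for n in range(1, 11):
        print(f"{n:2d} evaluators -> {proportion_found(n):6.1%} found")
    # With lambda_ = 0.31, five evaluators are expected to find ~84%
    # of the problems, the origin of the "five is enough" rule of thumb.
```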
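The second recommendation above, stopping when additional participants contribute nothing new, can be made operational with a simple saturation check. The sketch below assumes each evaluator's findings are recorded as a set of problem identifiers; the function name saturation_point, the threshold parameter, and the example reports are hypothetical illustrations rather than an established procedure.

```python
# A minimal sketch of the saturation-based stopping rule described
# above. The problem IDs and reports are hypothetical; in practice each
# set would hold the problems logged during one evaluator's session.

from typing import Iterable, Set

def saturation_point(reports: Iterable[Set[str]], min_new: int = 1) -> int:
    """Number of evaluators after which an additional evaluator
    contributed fewer than min_new previously unseen problems."""
    seen: Set[str] = set()
    count = 0
    for report in reports:
        new_problems = report - seen
        if count > 0 and len(new_problems) < min_new:
            break  # this evaluator added (almost) nothing new: stop here
        seen |= report
        count += 1
    return count

# Hypothetical example: the fourth evaluator reports only problems that
# earlier evaluators already found, so the process saturates at three.
reports = [{"p1", "p2"}, {"p2", "p3"}, {"p4", "p5"}, {"p1", "p3"}]
print(saturation_point(reports))  # -> 3
```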
Collaboration
Another approach to improving heuristic evaluation is to have evaluators
collaborate in pairs. Traditionally, evaluators perform their sessions
individually, not conferring with each other until they have completed
their evaluations. However, collaboration can be useful, since evaluators
ultimately base their results on their own judgments of the heuristics and
of the interface under evaluation. If it is not possible to recruit a group of
double experts, one expert in usability and one expert in the problem
domain can work together. Such a pair, with two competencies and two
sets of experience, may discover more problems. Collaboration can also
counter the risk of evaluators becoming bored and unmotivated, which
may be a cause of false alarms and of problems being overlooked.