It should be noted that many of the problems identified by heuristic
evaluation may not actually affect the usability of the system. In addition,
the results are often presented only in negative terms, focusing on what is
bad about the design rather than also highlighting what is good.
13.3.4 Co-operative Evaluation
Co-operative evaluation is another type of expert evaluation method. It was
developed at the University of York (UK), and is related to Scandinavian design
practices (Monk et al. 1993; Müller et al. 1997). As the name suggests, the
evaluation is carried out co-operatively, with the user effectively becoming part of
the evaluation team. The method is based on the notion that any user difficulties
can be highlighted by two simple tactics:
1. Identifying the use of inefficient strategies by the user (e.g., copy-paste-delete
rather than cut-and-paste).
2. Identifying occasions when the user talks about the interface, rather than their
tasks. These are called breakdowns, based on the notion that good tools should
be transparent, so the user should be talking about the task rather than the
technology.
The user is asked to talk aloud as they carry out a series of tasks, and can be
prompted with questions. It is a formative evaluation technique, in that it is used to
gather information about the design as it is being formed. The method can
therefore be used with a working prototype or with the real system.
13.3.5 A/B Testing
A recent trend is to do live testing of multiple interfaces. This is called A/B testing
or bucket testing. In this approach a web service exposes different users to different
interfaces and/or interactions. This can be seen at Google, for example, which was
one of the first Internet companies to use this method extensively to guide its
interface and interaction design decisions. In bucket tests, interfaces can vary in
subtle ways, such as color changes, or may differ substantially, including
manipulations of key functionality. User actions such as clicks (measured as
click-through rates, or CTRs) are studied to see what impact, if any, the changes
have on user behavior.
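To make the measurement concrete, the following sketch (illustrative Python,
not from the original text; the counts are hypothetical) computes the CTR for
two buckets and a two-proportion z statistic for the difference between them:

    from math import sqrt

    def ctr(clicks, impressions):
        # Click-through rate: clicks divided by impressions (exposures).
        return clicks / impressions

    def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
        # Z statistic for the difference between two observed CTRs,
        # using the pooled proportion for the standard error.
        p_a = clicks_a / n_a
        p_b = clicks_b / n_b
        p_pool = (clicks_a + clicks_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Hypothetical counts: bucket A (control) and bucket B (variant).
    print(ctr(420, 10000), ctr(473, 10000))          # 0.042 vs. 0.0473
    print(two_proportion_z(420, 10000, 473, 10000))  # approx. -1.8

A z value beyond roughly ±1.96 would suggest that the observed difference in
CTR is unlikely to be due to chance at the conventional 5% level.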
There are many advantages to these kinds of studies, not least that a test can be
run at scale while maintaining ongoing business, and that feedback is fast. Of
course, this approach requires building the test interfaces and having a platform
on which to partition users into conditions and deliver the different experiences.
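As an illustration of such partitioning (a minimal sketch under assumed
conditions, with a hypothetical assign_bucket helper; no specific platform's
API is implied), each user can be assigned deterministically to a condition by
hashing the user identifier together with the experiment name, so a returning
user always sees the same variant:

    import hashlib

    def assign_bucket(user_id, experiment, variants=("A", "B")):
        # Hash the experiment name together with the user id so each user
        # gets a stable variant within an experiment, while assignments
        # across different experiments are independent.
        key = (experiment + ":" + user_id).encode("utf-8")
        digest = hashlib.sha256(key).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # The same user always lands in the same bucket for this experiment.
    print(assign_bucket("user-12345", "homepage-color-test"))
    print(assign_bucket("user-12345", "homepage-color-test"))  # identical

Including the experiment name in the hashed key decorrelates bucket
assignments across concurrent experiments, so being in bucket A of one test
says nothing about a user's bucket in another.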