We've distilled the different sources of bias in a usability study into seven gen-
eral categories:
Participants: Your participants are critical. Every participant brings a certain
level of technical expertise, domain knowledge, and motivation. Some par-
ticipants may be well targeted and others may not. Some participants are
comfortable in a lab setting, whereas others are not. All of these factors make
a big difference in what usability issues you end up discovering.
Tasks: The tasks you choose have a tremendous impact on what issues are
identified. Some tasks might be well defined with a clear end state, others
might be open ended, and yet others might be self-generated by each partici-
pant. The tasks basically determine what areas of the product are exercised
and the ways in which they are exercised. Particularly with a complex
product, this can have a major impact on what issues are uncovered.
Method: The method of evaluation is critical. Methods might include tradi-
tional lab testing or some type of expert review. Other decisions you make
are also important, such as how long each session lasts, whether the partici-
pant thinks aloud, or how and when you probe.
Artifact: The nature of the prototype or product you are evaluating has a huge
impact on your findings. The type of interaction will vary tremendously
depending on whether it is a paper prototype, a functional or semifunctional
prototype, or a production system.
Environment: The physical environment also plays a role. The environment
might involve direct interaction with the participant, indirect interaction via
a conference call or from behind a one-way mirror, or even a session in
someone's home.
Other characteristics of the physical environment, such as lighting, seating,
observers behind a one-way mirror, and videotaping, can all have an impact
on the findings.
Moderators: Different moderators will also influence the issues that are
observed. A UX professional's experience, domain knowledge, and motiva-
tion all play a key role.
Expectations: Norgaard and Hornbaek (2006) found that many usability
professionals come into testing with expectations about which areas of the
interface are the most problematic. These expectations have a significant
impact on what they report, often causing them to miss other important issues.
An interesting study that sheds some light on these sources of bias was
conducted by Lindgaard and Chattratichart (2007). They analyzed the reports
from the nine teams in CUE-4 who conducted actual usability tests with real
users. They looked at the number of participants in each test, the number of
tasks used, and the number of usability issues reported. They found no
significant correlation between the number of participants in the test and
the percentage of usability problems found. However, they did find a
significant correlation between the number of tasks used and the percentage
of usability problems found (r = 0.82, p < 0.01). When looking at the
percentage of new problems uncovered, the correlation with the number of
tasks was even higher (r = 0.89, p < 0.005). As Lindgaard and
Chattratichart (2007) concluded, these results suggest "that with
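To make the statistic above concrete: the r values reported by Lindgaard and Chattratichart are Pearson correlation coefficients. The sketch below shows how such a coefficient is computed; the data points are made up for illustration only and are not the actual CUE-4 results.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical (made-up) data: number of tasks each team used vs. the
# percentage of known usability problems that team reported.
tasks = [5, 7, 9, 10, 12, 14, 15, 18, 20]
pct_found = [22, 30, 35, 33, 45, 50, 48, 60, 66]

print(f"r = {pearson_r(tasks, pct_found):.2f}")
```

A strongly positive r, as in the study, indicates that teams running more tasks tended to uncover a larger share of the usability problems.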