Examining Bias: Qualitative Analysis
The purpose of any type of usability testing is to predict the possible problems that real users will have.
The question to ask yourself is, "Do we think users would have this problem in real life?" closely
followed by "Why or why not?" In other words, the issue is whether one or more forms of bias may be
strong enough to justify reversing the tentative conclusion that the interface has a problem.
You can't eliminate bias, so be vigilant in questioning its effects on the data you're collecting from
usability testing. Although this qualitative analysis method may be overkill for most problems, it can be
helpful if team members disagree about whether a legitimate problem has been found. The steps are
as follows:
1. List all the sources of bias that might have played a role in getting a particular result from
testing. Consider all the sources listed earlier, although you may decide that there are only one
or two that are relevant to your situation. For example, if you found a particular problem in the
only test that had a non-representative user and someone thinks that the paper prototype
contributed to the confusion, you would have two sources to consider.
2. Determine the direction of each effect—does the bias act to strengthen your premise that
there's a problem or weaken it? Both effects are possible. For example, if your product is
intended for use by research chemists and one user who was only a chemistry student had
trouble understanding some terminology, that bias acts to weaken the conclusion—more subject
matter expertise might have helped the user succeed. But sometimes the presence of a bias
can strengthen a conclusion rather than weaken it. For example, hand-drawn corrections to a
screen shot stand out. Thus, if you make a hand-drawn correction to something you want users
to notice and they still don't respond to it, you have stronger evidence that there's a problem with
the interface than you'd have gotten by testing it on a computer.
3. Estimate the magnitude of each effect. Although this is hard to do in absolute terms (and that's
why I'm calling this method a qualitative technique rather than a quantitative one), you might
decide that one source of bias is relatively weak while another is strong. Try to weigh all the
factors before drawing your conclusion about whether the interface has a problem.
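One way to keep this bookkeeping explicit is a rough sketch like the following (in Python). The
BiasSource structure, the weigh function, and the numeric weights are illustrative assumptions
rather than a prescribed part of the method; the point is simply to record each suspected bias,
its direction, and its rough magnitude before drawing a conclusion.

    from dataclasses import dataclass

    # Rough qualitative weights; the specific numbers are illustrative only.
    MAGNITUDE = {"weak": 1, "moderate": 2, "strong": 3}

    @dataclass
    class BiasSource:
        name: str
        direction: str   # "strengthens" or "weakens" the tentative conclusion
        magnitude: str   # "weak", "moderate", or "strong"

    def weigh(biases):
        """Tally which way the suspected biases push the tentative conclusion."""
        score = 0
        for b in biases:
            sign = 1 if b.direction == "strengthens" else -1
            score += sign * MAGNITUDE[b.magnitude]
        if score > 0:
            return "On balance, the biases strengthen the conclusion that there's a problem."
        if score < 0:
            return "On balance, the biases weaken it; ask whether real users would struggle."
        return "The biases roughly cancel out; weigh other evidence."

    # Example: a non-representative user weakens the conclusion, while a hand-drawn
    # correction that users still missed strengthens it.
    print(weigh([
        BiasSource("chemistry student instead of research chemist", "weakens", "strong"),
        BiasSource("hand-drawn correction that still went unnoticed", "strengthens", "moderate"),
    ]))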
From the Field: Bias—A Case Study
My client, Pearson Education, has a Web application called CourseCompass where college instructors
create online courses to supplement their classroom activities. In a usability study, we watched users
complete the registration process using a paper prototype that used screen shots printed in grayscale.
In our scenario, the users were teaching an introductory psychology course and we showed them the
physical textbook they'd be using. (There was also an online version of the textbook that accompanied the
course; we had to choose the topic ahead of time because its contents appeared in the paper
prototype.) Although the registration and course creation process was the same regardless of the
subject matter, none of the users were psychology teachers—most taught English.
Once users had finished registering and defining some general information for their course, they saw a
page like the one in Figure 13.2. At this point, they were supposed to access their newly defined course
by clicking the Intro to Psych link to create a syllabus, assignments, and so on. Instead, users clicked
the Modify button (which let them modify the information they had already entered) or the Create a
Course button (which was for creating another course). Most users needed a clue from the facilitator to
realize that the course name, Intro to Psych, was the link to the place where they could create their
course materials.