the incidence and commonality of problems identified, and (2) usability professionals did not agree in
their ratings of problem severity." My colleague Rolf Molich, who coordinated some experiments known
as the Comparative Usability Evaluation (CUE) studies, found much the same thing (Molich et al.,
1998, 1999). It turns out that it's difficult for any two people to report the same set of usability problems
or assign them the same severity. As a profession we are still working on this challenge, but in the
meantime the implication is that the person conducting and/or reporting on a paper prototype study may
have more effect on its findings than does the technique itself. (This is one reason why I encourage
product team members to observe as many usability tests as they can—they keep me honest.)
On the other hand, a bias in analysis and reporting isn't always a bad thing. As a consultant, I
sometimes deliberately investigate and report problems that I know my clients are interested in while
downplaying others. For example, on one Web site I tested, the client knew that users were leaving the
site after seeing the search results page. This problem was costing them money. In the usability study,
I made every effort to determine the causes of this problem. Along the way I found many other
problems, some of which I didn't even bother to report because fixing them would have amounted to
rearranging deck chairs on the Titanic. If I had known nothing about the company's business priorities,
my findings would have been less biased, but they also would have been less useful.
Bias: Observers
Awareness of being observed can change a person's behavior. I'm not a social psychologist, but I can
describe some of the effects I've witnessed. Some users may be more nervous, less likely to explore,
or more likely to provide favorable feedback compared with how they'd respond in real life. I've also
seen users who clearly liked being "on stage" and responded in exactly the opposite manner. These
are just a few of the possible effects.
As covered in Chapters 9 and 10, the things you do to prepare users for the test setting go a long way
toward making them feel at ease, and proper preparation of the observers reduces the chance that
they will cause negative effects. I'm not very concerned about users being less
willing to offer negative feedback in the presence of observers—in my experience, properly briefed
users understand that they provide value by speaking up when something's not working for them, and
they do so.
One of the questions people sometimes have about observer bias is how aware the users are of the
observers' body language—nodding, scribbling notes, frowning, and so on. Naturally, if you're sitting in
a roomful of people who are blatantly reacting to everything you say or do, it's going to change your
behavior. In practice, I've found that the secret is to keep the users' attention focused on the prototype.
The more engaged users are in working with the prototype (and, to some extent, with the Computer and
facilitator), the more overt an action it takes to distract them.
I remember one example vividly. The two users were discussing what to do next. One user said,
"Maybe we should click Apply." Out of the corner of my eye, I saw all the observers (who were seated
within the users' peripheral vision) nodding their heads vigorously. I had asked them not to talk, and
they were following my instructions to the letter, but they were understandably rooting for their interface
to do well. However, the users were completely oblivious to this cue because they were looking at the
paper prototype and talking to each other. They went on to try something other than what the observers
had wanted.
Bias: Bugs
Bugs are yet another form of bias. I was testing a Web application for data security in which users had to
create a combination of security methods, called a policy, that was then used to protect internal data
resources. When users named the policy, we were curious to see whether they named it after the
security methods it used (Password and Retinal Scan) or after the thing they were trying to protect
(Recipe for Special Sauce). After one test, a bug somehow crept in that left us unable to delete the
policy the users had created. So the users in the next test saw the policy left over from the previous
test, and they gave theirs a similar name. This constituted a bias—we couldn't be sure
whether they would have chosen that name if they hadn't seen what the previous users did.
We had to ignore the data from these users on this issue.
"Bugs" can happen in a paper prototype too, when the Computer makes an error. As with software