A colleague of mine once worked with a company that was developing speech recognition software. The designers were trying to improve the accuracy of
the recognition and were puzzled because the beta test version would do well for users for a while but
then the recognition rate would suddenly deteriorate. They couldn't reproduce the problem in their lab.
When my colleague went to watch the beta users, she noticed that they closed their office doors while
they were doing dictation. When interrupted by a knock or someone opening the door, users would put
the system in "sleep" mode and then talk to their co-worker. In a flash of insight, my colleague realized
that the speech recognition software was trying to interpret the door-knock sounds as words, and that's
what made the recognition accuracy go haywire. In essence, something that was supposed to be
outside the bounds of the system was interfering radically with it.
You can make your test setting more realistic (or at least understand the ways in which it differs) if
you go to the trouble of doing so. There's a method called contextual inquiry where you set up site visits for
the purpose of understanding what kinds of problems your users face, how they solve them, and the
environment in which they do it. The insights you gain from contextual inquiry might cause you to set up
your test environment differently the next time you conduct a usability study.
Note In creating tasks and setting up the test environment, there is a balance to be struck between
realism and control—to find specific things, sometimes you have to set up your test in a
certain way. A paper prototype is an obviously artificial construct, but any experiment has its
artificial aspects. In a good experiment, the artificial aspects are there deliberately; in a bad
experiment, they're accidental.
Bias: Test Machine
Test machine bias is a subset of test environment bias. When testing on a computer, it's likely that the
computer the user is accustomed to will be different from your test machine. For example, an
America Online user may navigate to Web sites using their Favorites folder, which obviously you won't
have. Their method (or lack thereof) for filing bookmarks and documents will be different from yours,
and so might their screen resolution, double-click speed, font size, and so on.
I've found that bias introduced by the test machine is usually less problematic than other sources of bias, but
every now and then it can bite you. In one study of a pre-launch version of a Web site, we set the
default home page in the browser to the site we were testing (it was running on a development server,
so we couldn't use the real URL). This meant that the browser's Home button went to the home
page of the site. We did this as a convenience for ourselves, but some users picked up on it and
used the browser's Home button to get back to the home page of the site instead of the logo link
intended for this purpose. In retrospect, we realized that we'd introduced a bias that made their
navigation artificially easy, and we wished we'd used a bookmark instead (or typed in the URL for them,
since it was too complicated to memorize easily).
You can have "test machine" kinds of problems with a paper prototype too. I've also seen users click the
Home button on a hand-drawn browser, mistaking it for a link to the home page of the Web site.
Whether paper or machine, problems of this nature are often false ones. When I facilitate usability
tests, I'll step in and help users if I believe the problem they've encountered is an artifact of the test
machine or paper prototype rather than the interface itself. But I'll also discuss it with the team
afterward to make sure everyone agrees it was probably a false problem.
With a paper prototype, you have the option of eliminating some kinds of test machine problems,
assuming they aren't relevant to what you want to test. For example, to get around the fact that the
users' directory structures will all be different, you can artificially simplify your prototype so that when
they go to open a file, the one you've told them to use magically appears without the need to browse for
it. (You may be able to do similar things in software, but sometimes it's more trouble than it's worth.)
Bias: Facilitator
A careless facilitator can contaminate the very data he or she is collecting. Even after 10 years of
experience, I sometimes catch myself asking leading questions. A facilitator's mannerisms and body
language can provide clues to the user, such as nodding before the user makes a choice. (Nodding
afterward, although it also constitutes a bias, is not quite as bad.) As described in Chapter 8, there are
situations when a facilitator will deliberately alter users' behavior, for example, by suggesting that they
look in help. To some extent, these human variables are inevitable in usability testing.