Explanation: We observed that users tried clicking the "Manage" and "Create a Course" buttons before clicking any of the links. The buttons seemed to draw the users' attention more strongly than the link. Some users even said that they noticed the buttons first and figured that one of them had to be the right way to proceed. (If we'd tested a hand-drawn prototype with a sloppy link and neat buttons, then that might have been yet another bias.)

Direction of effect: Supports the premise that the interface had a problem.

Magnitude: Relatively strong, especially in light of what users said and did.
The team's conclusion? Yes, the lack of color in the paper prototype might have contributed to the
problem, but probably not enough to reverse our conclusion that we'd found a problem with the link
versus buttons. It's interesting to note that of the four sources of potential bias we discussed, only
one—the lack of color—was specific to the paper prototype. The other three sources of bias—
including the visual design, which the team ultimately agreed was most important—would all have been
present if we'd been testing on a computer.
Do you need to do this kind of detailed analysis for every problem that arises in usability testing? Of
course not. In my experience, most problems found in testing paper prototypes are quite believable
(sometimes blatant) and there's consensus among the product team that they're real.
But when someone asks, "But wouldn't it have been different on a computer?" my response is to ask
them to explain why they think it might have been different. When there's a specific reason, such as "It
took the Computer too long to find that screen, so the user forgot what he'd clicked," I'm inclined to
agree that we might have found a false problem. But when the rationale can't be articulated beyond a
vague, "I just think it might have been different," then I'm less willing to simply dismiss the problem.
Similarly, an elaborate explanation of how the users were supposed to behave may indicate wishful
thinking.
Whenever someone doubts the results from usability testing, it's prudent to double-check whether the
users fit the profile and whether there were any problems with the task or the way it was facilitated. Sometimes
a seemingly free-floating anxiety about the results from a paper prototype test is valid but is caused by
something other than the prototype. In other words, people can tell that something smells fishy before
they've learned to identify which type of fish it is.
Note By this point I may have given you so many things to worry about that you'll avoid usability
testing altogether out of fear of doing it wrong! But take heart—imperfect usability studies are
all we've got, and they're far better than nothing. And because good usability professionals
are by nature both analytical and empathetic, help is available for the asking via professional
organizations and discussion lists.