when learning a new interface but then don't when they're testing a paper prototype. There may be a
social effect here, in other words, not wanting to make the Computer do extra work. Or perhaps the
Computer's slower reactions simply make rapid explorations impractical.
Whatever the cause, a paper prototype may sometimes inhibit the user from "banging on" or exploring
an interface the way they normally would. However, it's difficult to predict what effect this has on
usability testing results. On the one hand, if the user is working more deliberately, he or she may think
through a problem more carefully and end up solving it. On the other hand, if the user is deprived of the
additional information gained by experimentation, he or she may fail a task in a usability test setting but
succeed in real life.
I do believe that paper prototyping can reduce the opportunities for serendipitous discovery. In tests of
Web sites, I have watched users stumble upon the drop-down arrow next to the Back button in Internet
Explorer (which gives access to the 10 most recent pages), comment that they've never noticed that
feature before, and then go on to use it successfully. That's serendipity (and also, possibly, a bias due
to the test machine if the browser they're accustomed to lacks that feature) and I don't see it when
testing paper prototypes. However, I don't believe that reducing serendipity constitutes much of a
problem in usability tests. In fact, if the difference between success and failure is determined by a
fortunate accident, that's usually a good indication that you have plenty of bigger problems to worry
about.
Liking (or Disliking) the Paper Prototype
A paper prototype can indeed affect users' perceptions of the interface, but, as with timing, it's hard to
be certain which way the effect goes or even whether it matters. Naturally, if we ask people which
prototype looks more attractive or professional, we would expect them to choose the variation that
appears to be finished. In their paper, Hong, Li, Lin, and Landay (2001) confirmed that "formal
representations of design were perceived to be very professional, close to finished, and less likely to
change. Informal representations were perceived to be unprofessional, incomplete, and very likely to be
changed."
Although it's tempting to assume that people will like the entire product better because its prototype
looks more polished, that's not necessarily the case—Hong and colleagues went on to say, "Both
formal and informal representations were rated similarly functional." And the Catani and Biers paper
succinctly notes, "There was no significant difference on any of the 15 subjective questions as a
function of prototype fidelity." Apparently, the users in these studies were able to perceive the same
capabilities in an interface even when a prototype had a low-fidelity appearance.
It's also possible for people to like a paper prototype more than they will like the real product. Wiklund,
Thurrott, and Dumas (1992) examined the relationship between the aesthetic refinement of a prototype
and its perceived usability. They created four versions of an electronic dictionary that varied in how
realistically they represented the appearance of the product. Participants rated the prototypes on a
variety of scales, including ease of use and ease of learning, both before and after using them. The
degree of realism didn't affect these ratings. They also had participants use the real device and provide
the same ratings. But because the prototypes hadn't accurately represented the slow response times
for some aspects of the real product, the usability ratings for the prototypes were more positive than
those for the real thing.
There's also the question of whether you even want users' subjective opinions. Unlike other
techniques, such as focus groups or surveys, usability testing places more emphasis on
what users are able to accomplish and less on their opinions. We don't just show them the interface
and ask if they like it; we give them a task and watch them do it. Although we care what users like, we
need to beware of confusing users' opinions with whether the interface actually gets the job done for
them. This sort of disparity between perception and reality is quite common in usability testing. Chapter
6 described an example of a user who kept saying that he liked the Excel Function Wizard even though
he wasn't able to use it without my help. This user may have been responding to the idea of the
Function Wizard as something that would help him create formulas, but the reality was that the existing
design didn't work for him.
The bottom line: Asking people if they like a paper prototype is probably not a useful exercise. It's very
hard to discern exactly what they are liking or disliking: the appearance, the perceived capability, or the
experience of using the interface. Because it is so difficult to separate these effects, I rarely ask