" But if there are dozens of bulbs in your online catalog, you might not want to prepare a prototype page
for each one. One tactic is to alter the task to constrain the set of possible solutions: "Your neighbor
told you that raccoons won't eat daffodil bulbs—is this true?" That way, you need only the product
page(s) for daffodils. Another tactic is to leave the task as is, and if the user tries to look at a bulb page
you didn't prepare, simply tell them what it would say about edibility. But in either case, you'd still want
to keep a realistic level of complexity in the higher-level pages such as the home page and the category
listings; otherwise the task would be artificially easy.
I recommend that every task have at least one solution. It's best to avoid so-called "red
herring" tasks where you tell the user to attempt something you know is impossible—many usability
specialists believe that such tasks are not ethical and can result in undue stress on test participants.
(Chances are, they'll have a hard enough time with the things the interface does let them do.) If there is
a reason you need to use this kind of task, tell the users up front that some tasks may not have
solutions and they're allowed to give up.
Has a Clear End Point
It's usually best to avoid overly open-ended tasks such as "Explore this site" or "Find out something
interesting about daffodils." Tasks that lack a clear end point are awkward to facilitate. The users won't
be sure whether they are doing the right thing, and it's hard for you to know when to end the task,
forcing you to jump in at an arbitrary point and interrupt them.
The users, not you, should decide when the task is done. Sometimes users successfully complete a
task but don't realize it because the interface doesn't give them sufficient feedback. It's common for
users to decide on their own to verify the results of their work, especially when using an interface for
the first time. For example, say they completed all the steps needed to configure the network; chances
are, they'll want to ping the devices (send a test message and look for a response) to be sure. If you
stop them too soon, you might miss the fact that users need some additional functionality to verify what
they've just done. (You should also watch for the opposite problem—users might think they're done, but
there's a step they don't know about, or perhaps they've done something incorrectly without realizing it.)
Elicits Action, Not Just Opinion
A good task should cause the user to interact with the interface, not just look at it and tell you what they
think. Often, what users say they like does not reliably indicate what they can successfully use, and
vice versa. It's not that users are lying to us—often they are responding to the concept of a particular
feature, when in reality it may not work the way they envision. For example, several years ago I tested
the Function Wizard in Microsoft Excel. One user got stuck trying to create a function that would
calculate a mortgage payment, and I finally had to help him. When I asked him what he thought of the
Function Wizard, he said, "Now that I know how to use it, I think it's great." Then I gave him the next
task, then the next, and watched him get wrapped around the axle each time. But he never wavered in
his professed liking for the Function Wizard.
Most of the time during a usability test should be spent watching users work with the interface as
opposed to talking about it. If you have opinion-type questions on your list of things you want to know
(e.g., "What do you think of this service?"), it's best to save them until the end of the test session, if you
even ask them at all. In general, usability testing is not an efficient method of soliciting opinions or other
subjective data because you're working with only one or two users at a time rather than the dozens or
hundreds of people you'd have with other methods, such as focus groups or surveys. Depending on
the completeness of the prototype, opinion questions may be even less useful in paper prototype tests
than they are in usability tests of finished products.
I usually avoid opinion questions, although I'll make an exception if someone on the product team has a
burning desire to ask one and he or she understands the limitations I've just described. Often, it's pretty
clear from the users' comments whether they like or dislike something.