Chapter 11: Data: Capturing, Prioritizing, and Communicating
So far this book has been about the paper prototype and the people involved in creating and testing it.
But the heart of a usability study is its data—all the stuff that you learn. This chapter looks at some
ways of capturing the data from paper prototype usability tests and what to do with it afterward. The
ultimate purpose of collecting usability data is to make the interface better, so a good method is one
that accomplishes this goal quickly and effectively. Naturally, every company is different, so a technique
that is ideal for one product team may be unsuitable for another.
Capturing the Data (Note-Taking)
I ask all usability test observers to take notes. Not only does this ensure that we capture as much
information as possible for the debriefing meeting (which I discuss in this chapter), but it also helps the
observers focus on what the users are doing.
What Observers Should Look For
For starters, give each observer a copy of the tasks, especially if you've filled in the Notes section of
the template as described in Chapter 6. This information will help observers understand the purpose of
each task and some of the specific things you want to look for. If all the observers had a direct hand in
creating the paper prototype, they'll already have plenty of their own questions, so you might not have
to brief them on what to look for. However, if there are observers who were less involved, you might
want to fill them in on the issues that arose during your walkthroughs.
But in general, what should observers look for? Cases where users are confused, don't understand a
term, or misconstrue a concept are quite common and worth writing down. User quotes are often
useful. Anything that surprises you is likely to be important because it indicates that users are behaving
differently than you expected—if you expected the users to zig and they zagged instead, there's
something going on at that decision point that you should understand better. And last, but not least, it's
important to note the things that worked well in the interface so that you don't waste time redesigning
something that doesn't really need it.
I occasionally find it helpful to give some observers specific assignments: "Gina, you watch whether
they use context-sensitive help. Ali, you write down every error message the Computer gives them."
I tend to do this when the observers are unfamiliar with the interface (it gives them a productive
way to become involved) or when the team has a lot of specific questions.
Observation, Inference, and Opinion
It sounds like a tautology to say that observers should write down observations, but in my experience
some people don't understand what that means. It's important for observers to record data in a form
that won't be subject to contentious debate later. It's natural for us to filter information through our own
set of ideas and prejudices, but this subjectivity means you're now dealing with opinion rather than
data—and you probably had all the opinions before you got started. So let's look at the differences
between an observation, an inference, and an opinion.
Observations are objective, factual statements of something the user said or did. Observations are the
"what," "when," and "how" of the action, for example, "User chose Landscape orientation," or "User
said, 'I can't tell what it did.'" Observations by their nature are not subject to debate—any two people
who watch the same test should theoretically walk away with the same set of observations. (In practice,
they won't, because it's not possible to write down everything, so each observer chooses a subset of
observations based on perceived importance.) Most of your usability test notes should consist of
observations—what did the users type, click, and say? But observations do not describe the why, in
other words, the causes, of problems or anything that goes on inside users' heads. Those are
inferences.