Observer-User Interactions: Questions to Avoid
I think it's great for observers to directly ask questions of the users, for the kinds of reasons described
earlier in the "benefits" section—for example, the facilitator can't know all the questions in advance. But
observers need some guidance to understand the difference between a good question and a bad one.
A bad question is one that
Reveals the answer.
Belongs in a focus group.
Asks users to imagine.
Asks users to explain their cognitive process.
Each of these is discussed further.
Questions That Reveal the Answer
For some reason, the way that a question naturally pops into one's mind is usually not the best way to
ask it. For example, "Was it clear to you that Purge would delete all the records?" reveals to the user
what the desired answer is. To avoid doing this, one trick I've learned is to formulate the question
mentally, pay attention to the second half of it (where the true question usually lies), and then turn it
around so that it's nonleading. For example:
Original question: "Was it clear to you that Purge would delete all the records?"
(Hmm, second half says: "Purge deletes all the records." Oops, I don't want to tell them what
Purge does, I want to ask them.)
Reformulated question: "What did you think Purge would do?" Possible follow-up if needed: "And
what did it do?"
Learning to ask questions in this manner takes practice, so it's unfair to expect perfection from anyone,
including the facilitator. If someone accidentally asks a leading question and the user spits back the
encapsulated answer, it just means that you can't draw any conclusions from it. But sometimes the
user will bail you out by saying, "You know, that never occurred to me—what I really thought was ..."
And then you can trust the answer, despite the leading question.
Questions That Belong in a Focus Group
Usability tests of paper prototypes are not an especially good way to gather data about whether users
like a product concept enough to buy it, how much they'd pay for it, and so on. In a usability test, the
primary purpose is to determine how well the interface does what users need it to do, and often you
also end up learning valuable things you didn't know about the users and their requirements. To gather
this kind of information, you need to remain open to all feedback, both positive and negative. Once you
start asking questions like how much they'd pay for the product, the nature of the discussion changes
from "How can we make this better for you?" to "How can we sell this to you?" Users will recognize that
you've put on your sales and marketing hat, and it may affect the nature of the feedback that you get
from that point on. Even if you reserve these questions until the very end, usability testing is an
inefficient way to get this kind of data because you've got one or two users and several product team
members—in a focus group, the ratio is reversed.
Questions That Ask Users to Imagine
Beware of questions that take the form, "What if we did X and Y? Would that be better?" Unless it's
something you're showing to the user, you're asking the user to use his or her imagination to envision
the improvements. But it's hard to know whether the user is envisioning the same thing as you are or
shares your understanding of the terms you're using—every profession has its jargon and high tech has
more than its fair share. To the user, these questions tend to sound like, "If we made it better, would it be better?"