design that received the best ratings was Template 1, and the design that received
the worst ratings was Template 4. This study also illustrates another common
technique in studies that involve a comparison of alternatives. Participants were
asked to rank-order the five templates from their most preferred to least preferred. In this study, 48% of the participants ranked Template 1 as their first choice, while 57% ranked Template 4 as their last choice.
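As a minimal sketch of how such rank data can be tallied, the snippet below counts first- and last-choice frequencies with Python's collections.Counter. The ranking lists are invented placeholders, not the study's actual responses:

```python
from collections import Counter

# Each participant's ranking, most preferred first.
# Hypothetical data for illustration only.
rankings = [
    ["T1", "T3", "T2", "T5", "T4"],
    ["T1", "T2", "T3", "T4", "T5"],
    ["T3", "T1", "T2", "T5", "T4"],
    ["T1", "T5", "T2", "T3", "T4"],
]

first_choices = Counter(r[0] for r in rankings)   # most preferred
last_choices = Counter(r[-1] for r in rankings)   # least preferred

n = len(rankings)
for template, count in first_choices.most_common():
    print(f"{template}: ranked first by {count / n:.0%} of participants")
for template, count in last_choices.most_common():
    print(f"{template}: ranked last by {count / n:.0%} of participants")
```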
6.7.3 Open-Ended Questions
Most questionnaires in usability studies include some open-ended questions in
addition to the various kinds of rating scales that we've discussed in this chapter.
In fact, one common technique is to allow the user to add comments related to
any of the individual rating scales. Although the utility of these comments for calculating specific metrics may be limited, they can be very helpful in identifying ways to improve the product.
Another flavor of open-ended question commonly used in usability studies is to ask users to list three to five things they like most about the product and three to five things they like least. These can be translated into metrics by counting the number of instances of essentially the same thing being listed and then reporting those frequencies (a simple tally of this kind is sketched below). Of course, you could also treat the remarks that participants offer while thinking aloud as these kinds of verbatim comments.
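As a minimal sketch of that counting step, assume each verbatim comment has already been hand-coded to a short category label (the labels here are invented for illustration); a frequency table then falls out of collections.Counter:

```python
from collections import Counter

# Hypothetical "liked least" comments, already hand-coded to categories.
# In practice a human coder maps each verbatim comment to a label first.
coded_comments = [
    "slow page loads",
    "confusing navigation",
    "slow page loads",
    "small font size",
    "confusing navigation",
    "slow page loads",
]

frequencies = Counter(coded_comments)
total = len(coded_comments)
for category, count in frequencies.most_common():
    print(f"{category}: {count} mentions ({count / total:.0%})")
```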
Entire books have been written about analyzing these kinds of verbatim responses using what's generally called text mining (e.g., Miner et al., 2012), and a wide variety of tools are available in this space (e.g., Attensity, Autonomy, and Clarabridge, to name a few). We will just describe a few simple techniques for collecting and summarizing these kinds of verbatim comments.
Summarizing responses from open-ended questions is always a challenge. We've never come up with a magic solution for doing this quickly and easily. One thing that helps is to be relatively specific in your open-ended questions. For example, a question that asks participants to describe anything they found confusing about the interface is going to be easier to analyze than a general "comments" field.

One very simple analysis method that we like is to copy all of the verbatim comments in response to a question into a tool for creating word clouds, such as Wordle.net. For example, Figure 6.25 shows a word cloud of responses to a question asking participants to describe anything they found particularly frustrating or challenging about the site.

Figure 6.25 Word cloud created with Wordle.net of responses in an online study of the NASA website about the Apollo Space Program to a question asking for anything they found particularly frustrating or challenging about the site.
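Wordle.net is a point-and-click web tool, but the same kind of cloud can be generated in a few lines of Python with the third-party wordcloud package. This is a sketch assuming that package is installed and the comments have been pasted into a plain-text file; "comments.txt" and "word_cloud.png" are placeholder names:

```python
from wordcloud import WordCloud, STOPWORDS

# Read all verbatim comments collected into one plain-text file.
with open("comments.txt", encoding="utf-8") as f:
    text = f.read()

# Build the cloud; common English stopwords are dropped so that
# frequent content words dominate, as in Figure 6.25.
cloud = WordCloud(
    width=800,
    height=400,
    background_color="white",
    stopwords=STOPWORDS,
).generate(text)

cloud.to_file("word_cloud.png")
```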