To analyze awareness-usefulness gaps, you must have both an awareness and a usefulness metric. We typically ask users about awareness as a yes/no question, for example, “Were you aware of this functionality prior to this study (yes or no)?” Then we ask, “On a 1 to 5 scale, how useful is this functionality to you (1 = Not at all useful; 5 = Very useful)?” This assumes that they have had a couple of minutes to explore the functionality. Next, you will need to convert the rating-scale data into a top-2-box score so that you have an apples-to-apples comparison. Simply plot the percentage of users who are aware of the functionality next to the percentage of users who found the functionality useful (percent top-2 box). The difference between the two bars is called the awareness-usefulness gap (see Figure 6.27).
[Figure 6.27: a bar chart titled “Awareness/Usefulness Gaps for Five Features,” plotting, on a 0% to 100% scale, the percentage of users aware of each of Features 1 through 5 next to the percentage who found each useful.]
Figure 6.27 Data from a study looking at awareness-usefulness gaps. Items with the greatest difference between awareness and usefulness ratings, such as Features 2 and 5, are those you should consider making more obvious in the interface.
6.8 SUMMARY
Many different techniques are available
for getting UX metrics from self-reported data. Here's a summary of some of the
key points to remember.
1. Consider getting self-reported data at both a task level and at the end
of a session. Task-level data can help you identify areas that need
improvement. Session-level data can help you get a sense of overall
usability.
2. When testing in a lab, consider using one of the standard questionnaires for assessing subjective reactions to a system. The SUS has been shown to be robust even with relatively small numbers of participants (e.g., 8-10).
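The SUS mentioned in point 2 is scored on a 0 to 100 scale using a fixed rule: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum of the contributions is multiplied by 2.5. A minimal sketch of that scoring:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from one participant's ten 1-5 responses.

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response).
    The summed contributions are scaled to 0-100 by multiplying by 2.5.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: strongly agreeing with every positive item and strongly
# disagreeing with every negative item yields the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```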
3. When testing a live website, consider using one of the online services such as WAMMI or ACSI. The major advantage they provide is the ability to show you how the results for your site compare to a large number of sites in their reference database.
4. Be creative but also cautious in the use of other techniques in addition to simple rating scales. When possible, ask for ratings on a given topic in several different ways and average the results to get more consistent data. Carefully construct any new rating scales. Make appropriate use of open-ended questions and consider techniques such as checking for awareness or comprehension after interacting with the product.
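The averaging suggested in point 4 can be sketched as follows; the three differently worded questions and the response values are invented for illustration:

```python
# Hypothetical data: three differently worded 1-5 rating questions that
# all probe the same topic (e.g., perceived ease of use), one row per
# participant, one column per question wording.
responses = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
]

# Average across the wordings for each participant, then across
# participants, yielding one more stable rating for the topic.
per_participant = [sum(row) / len(row) for row in responses]
overall = sum(per_participant) / len(per_participant)
print(f"Overall mean rating: {overall:.2f}")
```

Note that any negatively worded questions would need to be reverse-scored before averaging so that all columns point in the same direction.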