6.4 POSTSESSION RATINGS
One of the most common uses of self-reported metrics is as an overall
measure of perceived usability that participants are asked to give after having
completed their interactions with the product. These can be used as an over-
all “barometer” of the usability of the product, particularly if you establish a
track record with the same measurement technique over time. Similarly, these
kinds of ratings can be used to compare multiple design alternatives in a sin-
gle usability study or to compare your product, application, or website to the
competition. Let's look at some of the postsession rating techniques that have
been used.
6.4.1 Aggregating Individual Task Ratings
Perhaps the simplest way to look at overall perceived usability is to take an
average of the individual task-based ratings. Of course, this assumes that you
did in fact collect ratings (e.g., ease of use) after each task. If you did, then
simply take an average of them. Or, if some tasks are more important than
others, take a weighted average. Keep in mind that these data are different
from one snapshot at the end of the session. By looking at self-reported data
across all tasks, you're really taking an average perception as it changes over
time. Alternatively, when you collect self-reported data just once at the end of
the session, you are really measuring the participant's last impression of the
experience.
This last impression is the perception they will leave with, which will likely
influence any future decisions they make about your product. So if you want
to measure perceived ease of use for the product based on individual task per-
formance, then aggregate self-reported data from multiple tasks. However, if
you're interested in knowing the lasting usability perception, then we recom-
mend using one of the following techniques that takes a single snapshot at the
end of the session.
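The aggregation described above can be sketched in a few lines of code. This is a minimal illustration, not from the text; the task ratings and weights are hypothetical examples.

```python
# Sketch: aggregating per-task ease-of-use ratings into one overall score.

def aggregate_ratings(ratings, weights=None):
    """Return the simple or weighted mean of per-task ratings."""
    if weights is None:
        return sum(ratings) / len(ratings)
    # Weighted average: tasks with larger weights count for more.
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Hypothetical mean ease-of-use ratings (1-5 scale) for four tasks:
task_ratings = [4.2, 3.8, 2.9, 4.5]

# Simple average treats every task equally:
print(aggregate_ratings(task_ratings))  # 3.85

# Weighted average emphasizes a more important first task:
task_weights = [3, 1, 1, 1]
print(aggregate_ratings(task_ratings, task_weights))
```

Note that either average summarizes perception across the whole session, in contrast to a single end-of-session rating, which captures only the last impression.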
6.4.2 System Usability Scale
One of the most widely used tools for assessing the perceived usability
of a system or product is the System Usability Scale (SUS). It was originally
developed by John Brooke in 1986 while he was working at Digital Equipment
Corporation (Brooke, 1996). As shown in Figure 6.8, it consists of 10 statements
to which users rate their level of agreement. Half the statements are worded
positively and half are worded negatively. A five-point scale of agreement is
used for each. A technique for combining the 10 ratings into an overall score
(on a scale of 0 to 100) is also given. It's convenient to think of SUS scores
as percentages, as they are on a scale of 0 to 100, with 100 representing a per-
fect score.