whether the design effort between each iteration is addressing the most important usability issues.
5.4.2 Frequency of Issues Per Participant

It can also be informative to look at the number of (nonunique) issues each participant encountered. Over a series of design iterations, you would expect to see this number decreasing along with the total number of unique issues. For example, Figure 5.4 shows the average number of issues encountered by each participant for three design iterations. Of course, this analysis could also include the average number of issues per participant broken down by severity level. If the average number of issues per participant is steady over a series of iterations, but the total number of unique issues is declining, then you know there is more consistency in the issues that the participants are encountering. This would indicate that the issues encountered by fewer participants are being fixed, whereas those encountered by more participants are not.
Figure 5.3 Example data showing the number of unique usability issues by design iteration, categorized by severity rating (low, medium, high). The change in the number of high-severity issues is probably of key interest.
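To make the tally behind a chart like Figure 5.3 concrete, here is a minimal Python sketch. The issue log, its field names, and the issue IDs are hypothetical, not taken from the book; the point is simply that each unique issue is counted once per design iteration and then grouped by severity.

```python
from collections import Counter

# Hypothetical issue log (not from the book): one record per observation of a
# participant encountering an issue; "issue_id" is a stable identifier.
observations = [
    {"design": "Design 1", "participant": "P1", "issue_id": "NAV-01", "severity": "High"},
    {"design": "Design 1", "participant": "P2", "issue_id": "NAV-01", "severity": "High"},
    {"design": "Design 1", "participant": "P2", "issue_id": "FORM-03", "severity": "Low"},
    {"design": "Design 2", "participant": "P4", "issue_id": "FORM-03", "severity": "Low"},
]

# Count each unique issue once per design iteration, then tally by severity.
unique_issues = {(o["design"], o["issue_id"], o["severity"]) for o in observations}
severity_counts = Counter((design, severity) for design, _id, severity in unique_issues)

for (design, severity), n in sorted(severity_counts.items()):
    print(f"{design} / {severity}: {n} unique issue(s)")
```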
Figure 5.4 Example data showing the average number of usability issues encountered by participants in each of three usability tests.
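A similar tally yields the per-participant averages plotted in a chart like Figure 5.4. The sketch below is illustrative only; the data and names are hypothetical. It counts every (nonunique) issue each participant encountered and averages over all participants in each design's test, including any who encountered no issues.

```python
from collections import defaultdict

# Hypothetical data (not from the book): one tuple per issue a participant hit.
observations = [
    ("Design 1", "P1", "NAV-01"), ("Design 1", "P1", "FORM-03"),
    ("Design 1", "P2", "NAV-01"),
    ("Design 2", "P3", "NAV-01"),
    ("Design 2", "P4", "SRCH-02"),
]
# All participants in each test, including those who encountered no issues.
participants = {"Design 1": {"P1", "P2"}, "Design 2": {"P3", "P4", "P5"}}

issue_count = defaultdict(int)  # (design, participant) -> nonunique issue count
for design, participant, _issue_id in observations:
    issue_count[(design, participant)] += 1

for design, people in participants.items():
    total = sum(issue_count[(design, p)] for p in people)
    print(f"{design}: {total / len(people):.1f} issues per participant on average")
```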
5.4.3 Frequency of Participants

Another useful way to analyze usability issues is to observe the frequency or percentage of participants who encountered a specific issue. For example, you might be interested in whether participants correctly used some new type of navigation element on your website. You might report that half of the participants encountered a specific issue in the first design iteration, but only 1 out of 10 encountered the same issue in the second design iteration. This is a useful metric when you need to focus on whether you are improving the usability of specific design elements as opposed to making overall usability improvements.
With this type of analysis, it's important that your criteria for identifying specific issues are consistent between participants and designs. If a description of a specific issue is a bit fuzzy, your data won't mean very much. It's a good idea to explicitly document the issue's exact nature, thereby reducing any interpretation errors across participants or designs. Figure 5.5 shows an example of this type of analysis.
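As a rough illustration of this per-issue analysis, and of why documented, stable issue identifiers matter, the sketch below uses hypothetical data and issue IDs to compute the percentage of participants in each test who encountered each specific issue. The numbers are chosen to mirror the example above (half of the participants in the first iteration, 1 out of 10 in the second).

```python
from collections import defaultdict

# Hypothetical issue log keyed by documented, stable issue IDs so the same
# problem is counted consistently across participants and design iterations.
observations = [
    ("Design 1", "P1", "NAV-01"), ("Design 1", "P2", "NAV-01"),
    ("Design 1", "P3", "NAV-01"),
    ("Design 2", "P6", "NAV-01"),
]
participants = {
    "Design 1": {"P1", "P2", "P3", "P4", "P5", "P6"},
    "Design 2": {"P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9", "P10"},
}

hit_by = defaultdict(set)  # (design, issue_id) -> participants who encountered it
for design, participant, issue_id in observations:
    hit_by[(design, issue_id)].add(participant)

for (design, issue_id), people in sorted(hit_by.items()):
    pct = 100 * len(people) / len(participants[design])
    print(f"{design}, {issue_id}: {pct:.0f}% of participants")
```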