4.3.3 Collecting and Measuring Errors
Measuring errors is not always easy. Similar to other performance metrics, you
need to know what the correct action should be or, in some cases, the correct
set of actions. For example, if you're studying a password reset form, you need
to know what is considered the correct set of actions to reset the password successfully and what is not. The better you can define the universe of correct and
incorrect actions, the easier it will be to measure errors.
An important consideration is whether a given task presents only a single
error opportunity or multiple error opportunities. An error opportunity is basi-
cally a chance to make a mistake. For example, if you're measuring the usability
of a typical login screen, at least two error opportunities are possible: making
an error when entering the user name and making an error when entering the
password. If you're measuring the usability of an online form, there could be as
many error opportunities as there are fields on the form.
In some cases there might be multiple error opportunities for a task, but you only care about one of them. For example, you might be interested only in whether users click on a specific link that you know will be critical to completing their task. Even though errors could be made in other places on the page, you're narrowing your scope of interest to that single link. If users don't click on the link, it is considered an error.
The most common way of organizing error data is by task. Simply record the
number of errors for each task and each user. If there is only a single opportunity
for error, the numbers will be 1's and 0's:
0 = No error
1 = One error
If multiple error opportunities are possible, the numbers will vary between 0 and the maximum number of error opportunities. The more error opportunities, the harder and more time-consuming it will be to tabulate the data. You can count errors while observing users during a lab study, by reviewing videos after the sessions are over, or by collecting the data using an automated or online tool.
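If you log the raw counts in a simple task-by-participant table, the tabulation itself is easy to automate. Here is a minimal sketch in Python; the task names, participant IDs, and counts are hypothetical, purely for illustration:

    # Hypothetical example: error counts per task and participant.
    errors = {
        "Reset password": {"P1": 0, "P2": 1, "P3": 0, "P4": 1},
        "Update profile": {"P1": 2, "P2": 0, "P3": 1, "P4": 3},
    }

    for task, by_user in errors.items():
        counts = list(by_user.values())
        mean = sum(counts) / len(counts)
        print(f"{task}: mean errors per participant = {mean:.2f}")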
If you can clearly define all the possible error opportunities, another approach
could be to identify the presence (1) or absence (0) of each error opportunity
for each user and task. The average of these for a task would then reflect the incidence of those errors.
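As a rough sketch of that approach (again in Python, with hypothetical opportunity names and observations), each row codes one participant's errors across the opportunities for a single task, and averaging each column gives the incidence of that error:

    # Hypothetical data: rows are participants, columns are error
    # opportunities for one task; 1 = error made, 0 = no error.
    opportunities = ["user name", "password", "submit"]
    observations = [
        [0, 1, 0],
        [1, 1, 0],
        [0, 0, 0],
        [0, 1, 1],
    ]

    for i, opp in enumerate(opportunities):
        incidence = sum(row[i] for row in observations) / len(observations)
        print(f"{opp}: error incidence = {incidence:.0%}")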
4.3.4 Analyzing and Presenting Errors
The analysis and presentation of error data differ slightly depending on whether
a task has only one error opportunity or multiple error opportunities. If each
task has only one error opportunity, then the data are binary for each task (the
user made an error or didn't), which means that the analyses are basically all the
same as they are for binary task success. You could, for example, look at average error rates per task or per participant. Figure 4.6 is an example of presenting error rates in this way.
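Because the data are binary, the same summaries used for binary task success carry over directly. The following sketch (with hypothetical data) computes an error rate per task and per participant:

    # Hypothetical data: rows are participants, columns are tasks;
    # 1 = the participant made an error on that task, 0 = no error.
    data = [
        [0, 1, 0],
        [1, 0, 0],
        [0, 1, 1],
        [0, 0, 0],
    ]

    task_rates = [sum(col) / len(col) for col in zip(*data)]
    participant_rates = [sum(row) / len(row) for row in data]
    print("Error rate per task:", [f"{r:.0%}" for r in task_rates])
    print("Error rate per participant:", [f"{r:.0%}" for r in participant_rates])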