In some situations it might be interesting to know which tasks fall above or
below a threshold. For example, for some tasks an error rate above 20% might be
unacceptable, whereas for others anything above 5% is unacceptable. The most
straightforward analysis is first to establish an acceptable threshold for each
task or each participant, and then to calculate whether that task's error rate
or that user's error count was above or below the threshold.
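As a simple illustration, a threshold check like this can be scripted in a few lines of Python. All of the task names, error rates, and thresholds below are hypothetical:

```python
# Minimal sketch: flag tasks whose error rate exceeds an agreed threshold.
# The task names, rates, and thresholds are invented for illustration.
error_rates = {"login": 0.22, "search": 0.04, "checkout": 0.11}
thresholds = {"login": 0.05, "search": 0.05, "checkout": 0.20}

for task, rate in error_rates.items():
    status = "above" if rate > thresholds[task] else "at or below"
    print(f"{task}: {rate:.0%} is {status} the {thresholds[task]:.0%} threshold")
```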
Sometimes you want to take into account that not all errors are created
equal. Some errors are much more serious than others. You could assign
a severity level to each error, such as high, medium, or low, and then cal-
culate the frequency of each error type. This could help the project team
focus on the issues that seem to be associated with the most serious errors.
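A severity tally is equally easy to compute. The sketch below assumes each error has already been logged with a severity label; the entries themselves are invented:

```python
from collections import Counter

# Hypothetical error log, each entry tagged high/medium/low.
errors = [
    {"description": "chose wrong menu item", "severity": "low"},
    {"description": "deleted the wrong record", "severity": "high"},
    {"description": "skipped a required field", "severity": "medium"},
    {"description": "deleted the wrong record", "severity": "high"},
]

# Frequency of each severity level across all observed errors.
severity_counts = Counter(e["severity"] for e in errors)
print(severity_counts)  # Counter({'high': 2, 'low': 1, 'medium': 1})
```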
4.3.5 Issues to Consider When Using Error Metrics
Several important issues must be considered when looking at errors. First, make
sure you are not double counting errors. Double counting happens when you
assign more than one error to the same event. For example, assume you are
counting errors in a password field. If a user typed an extra character in the pass-
word, you could count that as an “extra character” error, but you shouldn't also
count it as an “incorrect character” error.
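One way to guard against double counting is to make the error categories mutually exclusive at coding time, so each event receives exactly one code. A minimal sketch for the password example, with hypothetical categories:

```python
def classify_password_error(expected: str, typed: str) -> str:
    """Return exactly one error code per attempt (hypothetical categories).

    The ordering of the checks sets precedence, so an attempt with an
    extra character is never also counted as an incorrect character.
    """
    if typed == expected:
        return "no error"
    if len(typed) > len(expected):
        return "extra character"
    if len(typed) < len(expected):
        return "missing character"
    return "incorrect character"
```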
Sometimes you need to know more than just an error rate; you need to know
why different errors are occurring. The best way to do this is to code each
error by type, based on the kinds of errors that actually occurred. For exam-
ple, continuing with the password example, the types of errors might include
“missing character,” “transposed characters,” “extra character,” and so on. At a
higher level, you might have “navigation error,” “selection error,” “interpreta-
tion error,” and so on. Once you have coded each error, you can run frequencies
on the error types for each task to better understand exactly where the prob-
lems lie. Working from an established coding scheme will also make collecting
error data more efficient.
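For instance, once each error carries a type code, per-task frequencies fall out of a simple tally. The log entries below are invented for illustration:

```python
from collections import Counter

# Hypothetical coded error log as (task, error_type) pairs.
error_log = [
    ("login", "missing character"),
    ("login", "transposed characters"),
    ("login", "missing character"),
    ("search", "navigation error"),
    ("checkout", "selection error"),
]

# Tally error types separately for each task.
by_task: dict[str, Counter] = {}
for task, error_type in error_log:
    by_task.setdefault(task, Counter())[error_type] += 1

for task, counts in by_task.items():
    print(task, counts.most_common())
# login [('missing character', 2), ('transposed characters', 1)]
# search [('navigation error', 1)]
# checkout [('selection error', 1)]
```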
In some cases, an error is the same as failing to complete a task—for example,
with a login page that allows only one chance at logging in. If no errors occur
while logging in, it is the same as task success. If an error occurs, it is the same
as task failure. In this case, it might be easier to report errors as task failure. It's
not so much a data issue as it is a presentation issue. It's important to make sure
your audience understands your metrics clearly.
Another enlightening metric can be the incidence of repeated errors—namely
the case where a participant makes essentially the same mistake more than once,
such as repeatedly clicking on the same link that looks like it might be the right
one but isn't.
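Counting repeats is straightforward once each error is logged against a participant: any occurrence of the same (participant, error) pair beyond the first counts as a repeat. A small sketch with invented data:

```python
from collections import Counter

# Hypothetical observations as (participant, error_type) pairs.
observations = [
    ("P1", "clicked misleading link"),
    ("P1", "clicked misleading link"),
    ("P2", "clicked misleading link"),
    ("P1", "clicked misleading link"),
]

# repeats = occurrences beyond the first for each participant/error pair.
counts = Counter(observations)
repeats = {pair: n - 1 for pair, n in counts.items() if n > 1}
print(repeats)  # {('P1', 'clicked misleading link'): 2}
```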
4.4 EFFICIENCY
Time on task is often used as a measure of efficiency, but another way to mea-
sure efficiency is to look at the amount of effort required to complete a task.