8.4 SUMMARY
Some of the key takeaways from this chapter are as follows.
1. An easy way to combine different usability metrics is to determine the per-
centage of users who achieve a combination of goals. This tells you the over-
all percentage of users who had a good experience with your product (based
on the target goals). This method can be used with any set of metrics and is
understood easily by management.
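This combined-goals percentage can be computed directly from per-participant data. A minimal sketch, with hypothetical participant data and assumed target goals (completion, a 90-second time limit, and a satisfaction rating of at least 4):

```python
# Hypothetical per-participant results from a usability test.
# Each tuple: (task completed?, task time in seconds, satisfaction on a 1-5 scale)
participants = [
    (True, 55, 4.5),
    (True, 80, 3.0),
    (False, 120, 2.0),
    (True, 60, 4.0),
    (True, 95, 4.2),
]

# Assumed target goals for illustration: completed the task,
# finished within 90 seconds, and rated satisfaction at least 4.
def had_good_experience(completed, time_s, satisfaction):
    return completed and time_s <= 90 and satisfaction >= 4.0

pct_good = 100 * sum(
    had_good_experience(*p) for p in participants
) / len(participants)
print(f"{pct_good:.0f}% of users met all goals")
```

Because the result is a single percentage of users who had a good experience, it needs no statistical background to interpret, which is what makes it easy to communicate to management.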
2. One way of combining different metrics into an overall “usability score”
is to convert each of the metrics to a percentage and then average them
together. This requires being able to specify, for each metric, an appropriate
minimum and maximum value.
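A sketch of this rescale-and-average approach, where the minimum and maximum for each metric are assumptions chosen for illustration (in practice they would come from your own goals or historical data):

```python
def to_percent(value, minimum, maximum, higher_is_better=True):
    """Rescale a raw metric to 0-100 given an assumed min and max."""
    value = max(minimum, min(maximum, value))  # clamp to the stated range
    pct = 100 * (value - minimum) / (maximum - minimum)
    return pct if higher_is_better else 100 - pct

# Assumed ranges for illustration:
completion_pct = to_percent(0.8, 0.0, 1.0)                   # completion rate
time_pct = to_percent(70, 30, 120, higher_is_better=False)   # seconds; lower is better
satisfaction_pct = to_percent(4.2, 1, 5)                     # 1-5 rating scale

overall = (completion_pct + time_pct + satisfaction_pct) / 3  # usability score
```

Note that metrics where lower values are better (such as time or errors) must be inverted before averaging, so that a higher overall score always means a better experience.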
3. Another way to combine different metrics is to convert each metric to a z
score and then average them together. Using z scores, each metric gets equal
weight when they are combined. But the overall average of the z scores will
always be 0, so this approach is most useful for comparing different subsets
of the data to each other, such as data from different iterations, different
groups, or different conditions.
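A sketch of the z-score approach, using hypothetical data for two metrics. Note the sign flip for time, so that a higher combined score always means better performance:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of values: subtract the mean, divide by the SD."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical per-participant metrics (same participants, in order).
times = [50, 70, 90, 60, 80]          # seconds; lower is better
ratings = [4.5, 3.0, 2.0, 4.0, 4.2]   # 1-5 scale; higher is better

combined = [
    (-zt + zr) / 2  # equal weight; time's sign flipped so higher = better
    for zt, zr in zip(z_scores(times), z_scores(ratings))
]
```

Since each set of z scores averages to 0 by construction, the combined scores do too; the individual values are only meaningful relative to one another, e.g., when split by iteration, group, or condition.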
4. The SUM technique is another method for combining different metrics, spe-
cifically task completion, task time, errors, and task-level satisfaction rating.
The method requires entry of individual task and participant data for the
four metrics. Calculations yield a SUM score, as a percentage, for each task
and across all tasks, including confidence intervals.
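A greatly simplified sketch of a SUM-style calculation for a single task. The published method standardizes each metric against a specification limit and also reports confidence intervals, which are omitted here; all data and specification limits below are hypothetical assumptions:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical per-participant data for one task.
completed = [1, 1, 0, 1, 1]            # task completion (1 = success)
times = [55, 80, 120, 60, 95]          # seconds; spec limit assumed 100 s
errors = [0, 1, 3, 0, 1]               # error counts; spec limit assumed 2
ratings = [4.5, 3.0, 2.0, 4.0, 4.2]    # 1-5 satisfaction; spec assumed 4

def pct_within_spec(values, spec, lower_is_better=True):
    """Estimate % of users within spec, assuming normally distributed data."""
    z = (spec - mean(values)) / stdev(values)
    p = NormalDist().cdf(z)
    return 100 * (p if lower_is_better else 1 - p)

components = [
    100 * mean(completed),                               # completion rate
    pct_within_spec(times, 100),                         # time under spec
    pct_within_spec(errors, 2),                          # errors under spec
    pct_within_spec(ratings, 4, lower_is_better=False),  # rating at/above spec
]
sum_score = mean(components)  # the task's SUM score, as a percentage
```

Averaging each task's score across all tasks then yields an overall SUM score for the test.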
5. Various types of graphs and charts can be useful for summarizing the results
of a usability test in a “usability scorecard.” A combination line and column
chart is useful for summarizing the results of two metrics for tasks in a test.
Radar charts are useful for summarizing the results of three or more metrics
overall. A comparison chart using Harvey Balls to represent different levels
of the metrics can summarize effectively the results for three or more metrics
at the task level.
6. Perhaps the best way to determine the success of a usability test is to com-
pare the results to a set of predefined usability goals. Typically these goals
address task completion, time, accuracy, and satisfaction. The percentage of
users whose data met the stated goals can be a very effective summary.
7. A reasonable alternative to comparing to predefined goals, especially for
time data, is to compare actual performance data to data for experts. The
closer the actual performance is to expert performance, the better.
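One simple way to operationalize this comparison is to express each task's performance as a ratio of expert time to actual time, so that 100% means users matched the expert. A sketch with hypothetical task names and times:

```python
# Hypothetical mean task times (seconds) for test participants vs. an expert.
expert_times = {"search": 20, "checkout": 45, "profile": 30}
user_times = {"search": 35, "checkout": 60, "profile": 90}

# 100% means users matched the expert; lower percentages mean slower.
scores = {
    task: 100 * expert_times[task] / user_times[task]
    for task in expert_times
}
overall = sum(scores.values()) / len(scores)
```

A low score on a single task (such as "profile" here) also points to where the design most needs attention, since that is where actual performance falls furthest short of expert performance.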