you can measure whether users clicking through a website (behavior) found
what they were looking for (goal). You can measure how long it took users to
enter and format a page of text properly in a word-processing application or
how many buttons users pressed in trying to cook a frozen dinner in a microwave. All performance metrics are calculated based on specific user behaviors.
Performance metrics rely not only on user behaviors but also on the use of scenarios or tasks. For example, if you want to measure success, the user needs to have specific tasks or goals in mind. The task may be to find the price of a sweater or to submit an expense report. Without tasks, performance metrics aren't possible. You can't measure success if the user is only browsing a website aimlessly or playing with a piece of software; how would you know whether he or she succeeded? This doesn't mean that the tasks must be something arbitrary handed to the users. They could be whatever the users came to a live website to do, or something that the participants in a usability study generate themselves. Often we focus studies on key or basic tasks.
Performance metrics are among the most valuable tools for any usability professional. They're the best way to evaluate the effectiveness and efficiency of many different products. If users are making many errors, you know there are opportunities for improvement. If users are taking four times longer than expected to complete a task, there's clearly room to improve efficiency. Performance metrics are the best way of knowing how well users are actually using a product.
Performance metrics are also useful in estimating the magnitude of a specific
usability issue. Many times it's not enough to know that a particular issue exists.
You probably want to know how many people are likely to encounter the same
issue after the product is released. For example, by calculating a success rate that
includes a confidence interval, you can derive a reasonable estimate of how big a
usability issue really is. By measuring task completion times, you can determine
what percentage of your target audience will be able to complete a task within
a specified amount of time. If only 20% of the target users are successful at a
particular task, it should be fairly obvious that the task has a usability problem.
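To make the success-rate idea concrete, here is a minimal sketch of a 95% adjusted-Wald (Agresti-Coull) confidence interval for a task's completion rate, a common choice for small usability samples. The function name, the z-value of 1.96, and the example data (7 successes out of 10 participants) are illustrative assumptions, not figures from the text.

```python
import math

def adjusted_wald_interval(successes, trials, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a binary
    success rate. Illustrative helper, not from the text."""
    # Inflate the counts by z^2 before applying the usual Wald formula;
    # this keeps the interval sensible at small sample sizes.
    p_adj = (successes + (z ** 2) / 2) / (trials + z ** 2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (trials + z ** 2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical example: 7 of 10 participants completed the task.
low, high = adjusted_wald_interval(7, 10)
print(f"Observed rate: 70%; 95% CI roughly {low:.0%} to {high:.0%}")
```

With only 10 participants the interval is quite wide (roughly 40% to 90%), which previews the sample-size caveat discussed below.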
Senior managers and other key stakeholders on a project usually sit up and pay
attention to performance metrics, especially when they are presented effectively.
Managers will want to know how many users are able to complete a core set of tasks
successfully using a product. They see these performance metrics as a strong indicator
of overall usability and a potential predictor of cost savings or increases in revenue.
Performance metrics are not a magical elixir for every situation. As with other metrics, an adequate sample size is required. Although the statistics will work whether you have 2 or 100 users, your confidence level will change dramatically depending on the sample size. If you're only concerned with the lowest of the low-hanging fruit, such as catching the most severe problems with a product, performance metrics are probably not a good use of time or money. But if you need a more fine-grained evaluation and have the time to collect data from 10 or more users, you should be able to derive meaningful performance metrics with reasonable confidence levels.
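As a rough illustration of how sample size drives confidence, the sketch below prints the approximate half-width of a 95% adjusted-Wald interval at several sample sizes. The fixed observed success rate of 70%, the z-value of 1.96, and the chosen sample sizes are all assumptions for the sake of the example.

```python
import math

# How much a 95% adjusted-Wald interval narrows as the sample grows,
# holding the observed success rate fixed at an assumed 70%.
z, rate = 1.96, 0.70
for n in (5, 10, 20, 50, 100):
    successes = rate * n
    p_adj = (successes + z ** 2 / 2) / (n + z ** 2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z ** 2))
    print(f"n={n:3d}: 95% CI is roughly +/- {margin:.0%}")
```

The margin shrinks from about ±32% with 5 users to about ±9% with 100, which is why a handful of participants is enough to spot severe problems but not to report precise performance numbers.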