During the standard NASA-TLX procedure users carry out pairwise comparisons of the six dimensions. In each of the 15 (5 + 4 + 3 + 2 + 1) comparisons, users select the dimension that contributed more to workload. Each dimension receives one point for each comparison where it was greater. The relative weight for each dimension is then given by the sum of those points, divided by 15 to normalize it.
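As a concrete illustration, here is a minimal Python sketch of this weighting step. The six dimension names are standard NASA-TLX; how the 15 choices are collected and stored is our own assumption, made purely for illustration.

# A minimal sketch of the NASA-TLX weighting step. The data
# structure for the user's choices is an assumption.
from itertools import combinations

DIMENSIONS = ["Mental Demand", "Physical Demand", "Temporal Demand",
              "Performance", "Effort", "Frustration"]

def tlx_weights(choices):
    """choices maps each unordered pair of dimensions (a frozenset)
    to the one the user judged the greater contributor to workload."""
    points = {d: 0 for d in DIMENSIONS}
    for pair in combinations(DIMENSIONS, 2):  # all 15 pairings
        points[choices[frozenset(pair)]] += 1
    return {d: points[d] / 15 for d in DIMENSIONS}  # weights sum to 1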
Probably the most accurate approach to measuring workload is to use a secondary task that the user must perform as and when they can (e.g., responding to visual or auditory signals). For example, at random intervals the user has to press 'A' when the number that pops up on the screen is odd and 'B' when the number is even. The speed and correctness of the responses provide a measure of how hard the user is working.
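To make this concrete, the following is a hypothetical console sketch of such an odd/even probe. A real study would present the probe inside the experimental software, interleaved with the primary task, and log each response.

# A minimal sketch of the odd/even secondary-task probe described
# above. A real experiment would use a GUI and run the probe
# alongside the primary task at random intervals.
import random
import time

def probe_once():
    number = random.randint(1, 99)
    start = time.monotonic()
    answer = input(f"{number} -> press A if odd, B if even: ").strip().upper()
    response_time = time.monotonic() - start
    expected = "A" if number % 2 else "B"
    # Slower or less accurate responses here suggest the primary
    # task is demanding more mental effort.
    return response_time, answer == expected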
We sometimes find that while two systems give comparable performance results on the primary task, performance on the secondary task may be very different. This suggests that one interface is more demanding than the other: where performance on the secondary task is worse, the user is expending more mental effort on the primary task.
13.5.9 Patterns of Usage
Rather than looking at performance on unit or benchmark tasks in a laboratory
setting, you can place prototype versions of your system in real work settings and
observe actual patterns of use, either directly or through videotape. Often you will
find that certain features, including those that have been requested by users, are
very rarely used, e.g., style sheets in Word.
You could also consider instrumenting the user interface or using a general keystroke logger (e.g., Kukreja et al. 2006) to collect timed logs of the keystrokes and other interactions that the user performs. This data gets logged in what are sometimes called dribble files. These files can quickly become excessively large, however, and thus hard to analyze, but they can be used to identify errors, error recovery, and patterns of use. Note that if you will be collecting data in this way, you will need ethical approval, which we discuss below.
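As a simple illustration of what instrumentation can look like, here is a minimal sketch using Python's standard Tkinter toolkit. The log file name and record format are our own assumptions; dedicated tools such as RUI (Kukreja et al. 2006) provide far more complete, system-wide logging.

# A minimal sketch of an instrumented interface: every keystroke is
# appended, with a timestamp, to a simple dribble file.
import time
import tkinter as tk

LOG_PATH = "dribble.log"  # hypothetical log file name

def log_key(event):
    with open(LOG_PATH, "a") as log:
        log.write(f"{time.time():.3f}\tKEY\t{event.keysym}\n")

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()
entry.bind("<Key>", log_key)  # record every keystroke with a timestamp
root.mainloop()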
If you are evaluating a system that has been released into the marketplace, you can also get some information on patterns of usage by looking at the logs of calls to customer/technical support services. Note that this data only covers problems that have been reported, rather than all of the problems. Users are often very flexible and adaptable: they will develop workarounds to make the system do what they want, rather than spend extra time and effort on the end of a phone line trying to report the problem to technical support.
Customer support activity data can be both politically and commercially sensitive: it may allow competitors to see where the problems are with a particular product. Such data can be very valuable, however, because it does give a good indication of where the real problems may lie.
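If the support logs are available in structured form, even a simple frequency count can reveal where problems cluster. The sketch below assumes a hypothetical CSV export with a 'category' column; real support systems will differ.

# A hypothetical sketch of mining support-call logs for patterns:
# count how often each problem category is reported. The file name
# and column name are assumptions, not a standard format.
import csv
from collections import Counter

def problem_frequencies(path="support_calls.csv"):
    with open(path, newline="") as f:
        counts = Counter(row["category"] for row in csv.DictReader(f))
    # Most frequently reported problems first; remember this reflects
    # only reported problems, not those users worked around.
    return counts.most_common()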