Try to think through as many details as you can before the study. The more
specific you can be, the better the outcome. For example, if you're collecting task
success metrics and completion times, make sure that you define your success
criteria and when exactly you'll turn off the clock. Also, think about how you're
going to record and analyze the data. Unfortunately, we can't provide a single,
comprehensive checklist to plan out every detail well in advance. Every metric
and evaluation method requires its own unique set of plans. The best way to
build your checklist is through experience.
One technique that has worked well for us has been “reverse engineering” the
data. This means sketching out what the data will look like before conducting the
study. We usually think of it as key slides in a presentation. Then we work back
from there to figure out what format the data must be in to create the charts.
Next, we start designing the study to yield data in the desired format. This isn't
faking the results but rather visualizing what the data might look like. Another
simple strategy is to take a fake data set and analyze it to make sure that you can
perform the desired analysis. This takes a little extra time up front, but it can
save far more once the real data set is in front of you.
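One way to make this concrete is to generate a throwaway data set in the shape you expect and run the planned summary over it. The sketch below does that in Python; the task names, the assumed 80% success rate, and the 7-point ease-of-use scale are our own illustrative assumptions, not anything prescribed by the method. It fabricates results for ten participants and produces the per-task numbers you would later chart:

```python
# Dry-run the planned analysis on fabricated data before the study,
# so formatting problems surface now rather than after data collection.
# All column names and values here are hypothetical placeholders.
import random
import statistics

random.seed(42)  # reproducible fake data

TASKS = ["find_product", "checkout", "update_profile"]

def fake_session(participant_id):
    """One participant's fabricated results, one record per task."""
    return [
        {
            "participant": participant_id,
            "task": task,
            "success": random.random() < 0.8,        # assume ~80% success
            "time_sec": round(random.uniform(20, 180), 1),
            "ease_rating": random.randint(1, 7),     # 7-point ease-of-use scale
        }
        for task in TASKS
    ]

# Fabricate ten participants' worth of data.
records = [row for pid in range(1, 11) for row in fake_session(pid)]

def summarize(records):
    """The analysis we plan to run on the real data: per-task success
    rate, mean completion time, and mean ease rating."""
    summary = {}
    for task in TASKS:
        rows = [r for r in records if r["task"] == task]
        summary[task] = {
            "success_rate": sum(r["success"] for r in rows) / len(rows),
            "mean_time_sec": statistics.mean(r["time_sec"] for r in rows),
            "mean_ease": statistics.mean(r["ease_rating"] for r in rows),
        }
    return summary

for task, stats in summarize(records).items():
    print(f"{task}: {stats['success_rate']:.0%} success, "
          f"{stats['mean_time_sec']:.0f}s, ease {stats['mean_ease']:.1f}/7")
```

If the fake data can't be summarized into the charts you sketched, that's the moment to change the study design, not after the sessions are over.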
Of course, running pilot studies is also very useful. By running one or two
pilot participants through the study, you'll be able to identify some of the out-
standing issues that you have yet to address in the larger study. It's important to
keep the pilot as realistic as possible and to allow enough time to address any
issues that arise. Keep in mind that a pilot study is not a substitute for planning
ahead. A pilot study is best used to identify smaller issues that can be addressed
fairly quickly before data collection begins.
11.5 BENCHMARK YOUR PRODUCTS
User experience metrics are relative. There's no absolute standard for what
counts as a “good” or a “bad” user experience. Because of this,
it's essential to benchmark the user experience of your product. This is done
constantly in market research. Marketers are always talking about “moving the
needle.” Unfortunately, the same discipline isn't always applied to user experience. But we
would argue that user experience benchmarking is just as important as market
research benchmarking.
Establishing a set of benchmarks isn't as difficult as it may sound. First, you
need to determine which metrics you'll be collecting over time. It's a good prac-
tice to collect data around three aspects of user experience: effectiveness (i.e., task
success), efficiency (i.e., time), and satisfaction (i.e., ease-of-use ratings). Next,
you need to determine your strategy for collecting these metrics. This would
include how often data are going to be collected and how the metrics are going
to be analyzed and presented. Finally, you need to decide on participants: which
types to include in your benchmarks (broken into distinct groups if needed),
how many you need, and how they'll be recruited. Perhaps the most important
thing to remember is to be consistent from one benchmark to another. This