much better idea of how much faith or confidence to place in the data. Without
confidence intervals, deciding whether differences are real is pretty much a
wild guess, even for what may appear to be big differences.
No matter what your data show, present confidence intervals whenever possible.
This is especially important for relatively small samples (e.g., less than 20).
The mechanics of calculating and presenting confidence intervals are pretty
simple. The only thing you need to pay attention to is the type of data you are
presenting: the calculation differs depending on whether the data are continuous
(such as completion time) or binary (such as task success). By showing the
confidence intervals, you can (hopefully) explain how the results generalize
to a larger population.
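As a sketch of those two calculations (the data, sample sizes, and variable names here are hypothetical, not from a real study), a t-based interval suits continuous data such as completion times, while an adjusted-Wald interval works well for small-sample binary success rates:

```python
import math
from statistics import mean, stdev

def t_confidence_interval(times, t_crit):
    """95% CI for the mean of continuous data (e.g., task times)."""
    m = mean(times)
    se = stdev(times) / math.sqrt(len(times))  # standard error of the mean
    return m - t_crit * se, m + t_crit * se

def adjusted_wald_interval(successes, n, z=1.96):
    """Adjusted-Wald 95% CI for binary data (e.g., task success)."""
    # Add z^2/2 successes and z^2 trials before the usual Wald formula;
    # this keeps the interval honest for small samples.
    p = (successes + z * z / 2) / (n + z * z)
    half = z * math.sqrt(p * (1 - p) / (n + z * z))
    return max(0.0, p - half), min(1.0, p + half)

# Eight participants' completion times in seconds (hypothetical data).
times = [142, 187, 95, 210, 171, 133, 250, 118]
low, high = t_confidence_interval(times, t_crit=2.365)  # t for df=7, 95%
print(f"Mean time 95% CI: {low:.0f} to {high:.0f} s")

# Six of eight participants succeeded (hypothetical data).
lo, hi = adjusted_wald_interval(6, 8)
print(f"Success rate 95% CI: {lo:.0%} to {hi:.0%}")
```

Note how wide both intervals come out with only eight participants; that width is exactly the honesty a confidence interval adds to a small-sample report.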
Showing your confidence goes beyond calculating confidence intervals. We
recommend that you calculate p values to help you decide whether to accept or
reject your hypotheses. For example, when comparing average task completion
times between two different designs, it's important to determine whether there's
a significant difference using a t test or ANOVA. Without running the
appropriate statistics, you just can't really know.
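As a rough illustration of that comparison (the completion times and the helper function below are hypothetical), a two-sample t test boils down to computing a t statistic and comparing it against the critical value for the appropriate degrees of freedom:

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

# Hypothetical completion times (seconds) for two designs, n=10 each.
design_a = [142, 187, 95, 210, 171, 133, 250, 118, 160, 145]
design_b = [101, 120, 88, 140, 115, 97, 152, 90, 110, 105]

t_stat, df = two_sample_t(design_a, design_b)
t_crit = 2.101  # two-tailed critical value for df=18, alpha=.05 (t table)
print(f"t({df}) = {t_stat:.2f}; significant at .05? {abs(t_stat) > t_crit}")
```

In practice, a statistics package such as `scipy.stats.ttest_ind` would give you the exact p value directly instead of a table lookup.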
Of course, you shouldn't misrepresent your data or present it in a misleading
way. For example, if you're showing task success rates based on a small sample
size, it might be better to show the numbers as a frequency (e.g., six out of
eight) rather than as a percentage. Also, use an appropriate level of precision
for your data. For example, if you're presenting task completion times, and the
tasks are taking several minutes, there's no need to present the data to the
third decimal place. Even though you can, you shouldn't.
11.9 DON'T MISUSE METRICS
User experience metrics have a time and a place. Misusing metrics has the
potential to undermine your entire UX program. Misuse might take the form of
using metrics where none are needed, presenting too much data at once,
measuring too much at once, or over-relying on a single metric.
In some situations it's probably better not to include metrics. If you're just
looking for some qualitative feedback at the start of a project, metrics might not
be appropriate. Or perhaps the project is going through a series of rapid design
iterations. Metrics in these situations might only be a distraction and not add
enough value. It's important to be clear about when and where metrics serve a
purpose. If metrics aren't adding value, don't include them.
It's also possible to present too much UX data at once. Just like packing for a
vacation, it's probably wise to lay out all the data you want to present and
then chop it in half. Not all data are equal. Some metrics are much more
compelling than others. Resist the urge to show everything; that's why
appendices were invented. We try to focus on a few key metrics in any
presentation or report. When you show too much data, the most important
message gets lost.