usability study with a large population can also be very useful. If you don't have
access to the technology to run A/B tests or online studies, we recommend using
e-mail and online surveys to get feedback from as many representative partici-
pants as you can.
3.3.10 Comparing Alternative Designs
One of the most common types of usability studies involves comparing more
than one design alternative. Typically, these types of studies take place early in
the design process, before any one design has been fully developed. (We often
refer to these as "design bakeoffs.") Different design teams put together semifunctional
prototypes, and we evaluate each design using a predefined set of
metrics. Setting up these studies can be a little tricky. Because the designs are
often similar, there is a high likelihood of a learning effect from one design to
another. Asking the same participant to perform the same task with all designs
usually does not yield reliable results, even when counterbalancing design and
task order.
There are two solutions to this problem. You can set up the study as purely
between subjects, whereby each participant only uses one design, which provides
a clean set of data but requires significantly more participants. Alternatively, you
can ask participants to perform the tasks using one primary design (counter-
balancing the designs) and then show the other design alternatives and ask for
their preference. This way you can get feedback about all the designs from each
participant.
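The rotation behind the second approach, counterbalancing which design each participant uses as their primary one, can be sketched as follows. The participant and design names are made up for illustration; this is only a minimal assignment helper, not a full experimental-design tool.

```python
import itertools

def assign_primary_designs(participants, designs):
    """Counterbalance the primary design across participants by
    cycling through the designs in order, so each design is used
    as the primary one equally often (hypothetical helper)."""
    rotation = itertools.cycle(designs)
    return {p: next(rotation) for p in participants}

# Six participants, three design alternatives: each design ends up
# as the primary design for exactly two participants.
participants = [f"P{i}" for i in range(1, 7)]
designs = ["Design A", "Design B", "Design C"]
assignments = assign_primary_designs(participants, designs)
```

Each participant then performs the tasks only with their assigned primary design, and sees the remaining alternatives afterward for a preference judgment.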
The most appropriate metrics to use when comparing multiple designs may
be issue-based metrics. Comparing the frequency of high-, medium-, and low-
severity issues across different designs will help shed light on which design or
designs are more usable. Ideally, one design ends up with fewer issues overall
and fewer high-severity issues. Performance metrics such as task success and task
times can be useful, but because sample sizes are typically small, these data tend
to be of limited value. A couple of self-reported metrics are particularly relevant.
One is asking each participant to choose which prototype they would most like to
use in the future (a forced-choice comparison). Also, asking each participant
to rate each prototype along dimensions such as ease of use and visual appeal
can be insightful.
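Tallying issue frequency by severity for each design, as described above, amounts to a simple cross-tabulation. The issue log below is invented for illustration; in practice each entry would come from a logged usability finding.

```python
# Hypothetical issue log from a comparative study: (design, severity) pairs.
issues = [
    ("Design A", "high"), ("Design A", "low"),
    ("Design B", "high"), ("Design B", "high"),
    ("Design B", "medium"), ("Design B", "low"),
]

def severity_counts(issue_log):
    """Count issues per design, broken out by severity level."""
    counts = {}
    for design, severity in issue_log:
        counts.setdefault(design, {"high": 0, "medium": 0, "low": 0})
        counts[design][severity] += 1
    return counts

summary = severity_counts(issues)
# Here Design A has fewer issues overall and fewer high-severity
# issues than Design B, which would favor Design A.
```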
3.4 EVALUATION METHODS
One of the great features of collecting UX metrics is that you're not restricted to
a certain type of evaluation method (e.g., lab test, online test). Metrics can be
collected using almost any kind of evaluation method. This may be surprising
because there is a common misperception that metrics can only be collected
through large-scale online studies. As you will see, this is simply not the case.
Choosing an evaluation method to collect metrics boils down to how many par-
ticipants are needed and what metrics you're going to use.