the lab you made qualitative measurements based on specific physical reactions. In a Lean
model you're examining the quantitative data: what percentage of users followed the path
you set out for them, and what percentage wandered away? Where did these wanderers stray
from the path? Where did they go? What changes might keep users on the paths where you
want them to be? If they're all doing the same thing, and going the same way, could that
be an alternate route to success? The Lean model opens up the process, giving you more
general data, which informs your revisions by showing you the behavior patterns of a real
audience.
* * *
Another way to express the difference between qualitative testing in a lab and the
quantitative approach of going live is this: qualitative asks why, while quantitative asks
what. When you have the luxury of time, you can select users on the basis of background,
and the likelihood that they'll use the product. Once you have them in the controlled en-
vironment of the lab, you can direct their experiences, decide what data they need, what
features they see, and then you can observe their reactions. Afterwards, or sometimes even
during the test, you can ask them questions formulated to get to the core of every feature
and function. With this approach, you can expect to find out how individuals react to partic-
ular elements. Their delight becomes an encouraging design factor, while their frustrations
are red flags. It's like stepping into your design, and shining a flashlight in every corner.
When you go live with an iteration of a design, it's more like turning on the lights in
a stadium. This is where testing goes public. You have far too many individual subjects to
pay much attention to each one. Instead they are the crowd. In this Lean paradigm, you
look at this crowd in terms of percentages and groupings. How many are experiencing the
page on mobile devices? What's the percentage of desktop users? How many conversions
do you get with the blue, and how many happen with the red? How many users find a task
to be high effort? Do more people find it to be low or medium effort? Are they willing to
make that effort? How many have trouble checking their email? What percentage tries to
search the site? How many of them find satisfaction? What percentage of visitors actually
become online buyers?
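The crowd-level comparisons described above — say, conversions with the blue variant versus the red one — come down to simple percentage arithmetic over event counts. A minimal sketch in Python, with all visitor and conversion counts hypothetical:

```python
# Hypothetical A/B results: visitor and conversion counts per variant.
variants = {
    "blue": {"visitors": 5200, "conversions": 312},
    "red":  {"visitors": 5100, "conversions": 255},
}

def conversion_rate(visitors, conversions):
    """Return the percentage of visitors who converted."""
    return 100.0 * conversions / visitors

rates = {name: conversion_rate(v["visitors"], v["conversions"])
         for name, v in variants.items()}

for name, rate in rates.items():
    print(f"{name}: {rate:.1f}% conversion")
```

With enough traffic, a persistent gap between the two rates is the kind of signal a Lean iteration acts on; a lab test would instead ask a handful of users why one variant felt easier.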
In the lab, you might ask users how they feel about one component or another. Was it
the right shape or color? Did an experience take longer than you had predicted? If so, was
this a source of frustration? The user will tell you specific reasons why he or she liked or
disliked a particular feature. Looking at the same feature through the lens of a live iteration,
you look for numbers. Did most users take more time with the email feature? Did this extra