Invariably, some folks like Design 1 and other folks like Design 2, and they're
passionate about their preferences. During multiple reviews, the advocates for each
design become more strident and dig in. Consensus is elusive, if not impossible.
During what is supposed to be the “final review,” tempers rise, and the two camps
are getting increasingly agitated, extolling the sophistication of their design over the
other. There are other subplots, because designer Josh Cheysak (Design 1) is hoping
his design is chosen to increase his chances of a raise, and designer Autumn Taylor
(Design 2) is hoping her design is chosen because she's bucking for the Creative
Director position. Nobody will budge. Think Boehner and Obama during the Great
Government Shutdown of 2013.
Just as Cheysak is about to pour his Red Bull all over Taylor's Moleskine sketch-
book, you cautiously offer to perform a “head-to-head comparison survey with inde-
pendent samples.”
A hush settles over the room. Taylor finally breaks the silence: “A what?”
You calmly explain that by running a survey featuring two different designs with
two different groups of people, you may be able to determine differences between
the two designs in terms of perceived sophistication and preference ratings. In other
words, you can determine which one is best, based on user feedback. Trying not to
sound too professorial, you add that “proper statistical analysis and good survey
design can guard against obtaining misleading results.”
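The comparison being proposed can be sketched in a few lines. This is a minimal illustration, not the book's analysis: the rating data are made up, and Welch's t-statistic is one common way to compare the means of two independent groups.

```python
# Hypothetical sketch: comparing mean "sophistication" ratings (1-7 scale)
# from two independent groups, one group per design. The ratings below
# are invented illustration data, not results from the text.
import math
import statistics

design1 = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]  # group that saw Design 1 only
design2 = [4, 3, 5, 4, 4, 5, 3, 4, 4, 3]  # group that saw Design 2 only

m1, m2 = statistics.mean(design1), statistics.mean(design2)
v1, v2 = statistics.variance(design1), statistics.variance(design2)
n1, n2 = len(design1), len(design2)

# Welch's t-statistic for two independent samples (unequal variances):
# a large |t| suggests the difference in means is unlikely to be chance.
t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
print(f"mean1={m1:.2f}  mean2={m2:.2f}  t={t:.2f}")
```

In practice the t-statistic would be compared against a t-distribution to get a p-value, which is where "proper statistical analysis" guards against reading noise as a real preference.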
The attendees simultaneously nod in agreement but McCarthey won't agree to
anything without “pushback.”
“Where are you getting these participants?” McCarthey asks with skepticism in
her voice.
“We can use our client list, and offer them a $25 gift certificate for completing a
survey. After all, they are the target audience, but they haven't seen either of these
two designs. Of course, we'll collect demographic data about each participant.”
“Will everyone see both designs?” McCarthey asks. “Doesn't sound right.”
“In this case, it's probably better that one group sees one design and another
group sees the other design. This eliminates any bias from seeing a second design
after seeing the first. I can randomize which participant sees which design.”
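The randomized assignment just described can be sketched as follows. This is a minimal illustration under assumed details: the participant IDs and group sizes are hypothetical, not from the text.

```python
# Minimal sketch of random assignment to two independent groups
# (a between-subjects design). Participant IDs are hypothetical
# stand-ins for names drawn from the client list.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 hypothetical clients

random.seed(42)               # fixed seed so the split is reproducible
random.shuffle(participants)  # random order removes assignment bias
half = len(participants) // 2
group_design1 = participants[:half]  # these people see Design 1 only
group_design2 = participants[half:]  # these people see Design 2 only

# Each participant sees exactly one design, so no one's rating is
# biased by having already seen the competing design.
assert not set(group_design1) & set(group_design2)
```

Shuffling the full list and splitting it in half keeps the groups equal in size while leaving the composition of each group to chance.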
“Hmmm,” McCarthey says, staring down at her iPhone, half-listening. “I guess
it's worth a try.”
Ah, a peaceful resolution is in sight. Before you can say “Camp David Peace
Accords,” the tension in the conference room evaporates. No more bruised egos, no
more subterfuge. Just an easy way to let data do the talkin' and to determine the cor-
rect design based on objective evidence, not hunches and egos.
2.3 COMPARING TWO MEANS
In Chapter 1 on basic statistical thought and the role of uncertainty, we covered a
topic called “hypothesis testing,” and we discussed how, when data values are col-
lected from a sample of the population, the mean of 10, or 100, or even 1000 people,