SIDEBAR: CAN I USE A HIGHER ALPHA?
We should at this point note again what a p-value means. If the p-value is, for example, 0.04, this
says that, assuming H0 is true, the probability of finding the result we found from the data, or a
result even further from what H0 claims (at its point of equality), is 0.04. We typically set a benchmark
of 0.05, and if the probability expressed by the p-value is under 0.05, we call the result “significant” and
reject H0. However, what if the p-value is 0.20? This would not be called a significant result. Still,
it does indicate that if H0 is true, the aforementioned probability is 0.20; some might view this as
indicating that there is an 80% chance that H0 is false, and, further, view that as a reason to “bet on”
H1. This reasoning is faulty for subtle reasons well beyond the scope of this text, having to
do with Bayesian statistics and prior probabilities. This is why the classical statistics approach of
hypothesis testing that threads throughout this text has survived for so long and is routinely utilized
as we have described it.
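The definition above can be made concrete with a small simulation. The sketch below uses hypothetical 1-5 rating data (not the survey data from this chapter) and estimates a two-sided p-value with a permutation test: under H0 the group labels are interchangeable, so we shuffle them many times and count how often a mean difference at least as extreme as the observed one appears.

```python
import random

def perm_test_p(sample1, sample2, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for a difference in means.

    Under H0 the group labels are interchangeable, so we shuffle
    them repeatedly and count how often the shuffled mean
    difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(sample1) / len(sample1) - sum(sample2) / len(sample2))
    pooled = list(sample1) + list(sample2)
    n1 = len(sample1)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n1]) / n1 - sum(pooled[n1:]) / (len(pooled) - n1))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical 1-5 sophistication ratings (illustration only,
# not the survey data from the chapter)
design1 = [5, 4, 4, 5, 3, 4, 5, 4]
design2 = [3, 4, 3, 2, 4, 3, 3, 4]
p = perm_test_p(design1, design2)
print(p)
```

Note that the p-value here answers exactly the question in the sidebar: "if H0 is true, how often would a difference this large (or larger) occur?" It says nothing directly about the probability that H0 itself is true or false.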
2.7 FINAL OUTCOME AT MADEMOISELLE LA LA
As UX researcher at Mademoiselle La La, you're now ready to confidently announce
the winner. After all, the p-value of 0.023 says that a difference of 0.82 (or more)
between the means of the two designs has only a 2.3% probability of occurring if
the true means are actually the same. That's pretty low, so you're able to deliver
the news: the higher sophistication mean of 4.22 for Design 1 over Design 2's 3.40
clearly makes Design 1 the better choice for the new home page.
That is, “better” from the perspective of sophistication as determined by a sample
from your target audience. Your analysis has shown that perceptions of the image of
the young girl being adored by the handsome young man are more “sophisticated”
than those of the couple in the rain, clutching the single umbrella. The reason for
that perception difference is another matter, and perhaps one that you'll be asked
to explore. But for now, just knowing the perception difference exists is tangible
progress.
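For readers who want to see the arithmetic behind a result like this, here is a minimal sketch of the Welch two-sample t statistic, again on hypothetical ratings rather than the survey's actual scores. For brevity it approximates the two-sided p-value with the normal distribution; Excel and standard statistics packages use the t distribution, so their p-values will differ somewhat for small samples.

```python
from statistics import NormalDist, mean, stdev

def welch_t_and_p(a, b):
    """Welch two-sample t statistic, with a normal approximation
    to the two-sided p-value (illustration only; Excel and most
    statistics packages use the t distribution, which matters
    for small samples)."""
    # Standard error of the difference in means (unequal variances)
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    t = (mean(a) - mean(b)) / se
    # Two-sided tail probability under the normal approximation
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Hypothetical 1-5 sophistication ratings, not the book's data
design1 = [5, 4, 4, 5, 3, 4, 5, 4]
design2 = [3, 4, 3, 2, 4, 3, 3, 4]
t, p = welch_t_and_p(design1, design2)
print(t, p)
```

A large |t| means the observed mean difference is many standard errors away from zero, which is what drives the p-value below the 0.05 benchmark.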
At the next creative staff meeting, you're ready for the inevitable question from
Creative Director Kristen McCarthey:
“So Sherlock, what did the survey tell us about the designs? You got a winner?”
“Yes,” you calmly reply. “I have some reliable results.”
You connect your laptop to the projector and show the group some screen shots
from your Excel output. You show the sample sizes, individual scores, means, and
p -values. You're ready for your short and sweet proclamation:
“What the survey data shows is that Design 1 is perceived as more sophisticated
than Design 2 by representative members of our target audience of women ages 18-55
with well-above-average disposable income. Design 1 got a 4.22 compared to 3.40 for
Design 2. Furthermore, the low p-value of 0.023 means that we have a statistically sig-
nificant difference between the two designs. We should launch with Design 1.”
A hush settles over the conference room. Your colleagues seem impressed, but
McCarthey isn't ready to concede anything yet.
“What's your sample size?” she asks.
 