And therefore, we reach the same conclusion as we did from the Excel analysis: we reject H0, and go with H1, and thus conclude that there is sufficient evidence to be convinced that the true mean sophistication values for the two designs are actually different.
In essence, the p-value of 0.023 is saying that a difference of 0.82 (or more) in the sample means of the two designs has only a 2.3% probability of occurring if the true means are, indeed, the same (i.e., if H0 is true).
If you were Perry Mason or Columbo or any lawyer (OK, we're dating ourselves), you might even say that the difference is "beyond a reasonable doubt," since the traditional cutoff point is 0.05 and the p-value is under 0.05. Clearly, the direction of the difference is that Design 1 is better (i.e., the design with the higher mean sophistication).
Underneath the software hood, the value of 0.023 (or the 0.025) is determined as
a function of (1) the observed difference in the means (the 0.82), (2) the two sample
sizes, and (3) the variability of the sophistication values within each design.
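To make those three ingredients concrete, here is a minimal sketch (in Python, using SciPy) of the kind of two-sample t-test the software runs behind the scenes. Only the 0.82 difference in means comes from the text; the individual means, standard deviations, and sample sizes below are made-up placeholders, since the book reports only the resulting p-value, so the printed p-value will resemble the real one only to the extent that these placeholders resemble the real data.

from scipy import stats

# Hypothetical summary statistics: only the 0.82 difference in means is
# taken from the text; the means themselves, the standard deviations,
# and the sample sizes are placeholders for illustration.
mean1, sd1, n1 = 5.62, 1.3, 30   # Design 1 (higher mean sophistication)
mean2, sd2, n2 = 4.80, 1.4, 30   # Design 2

# Classic pooled-variance two-sample t-test computed from summary statistics.
t_stat, p_value = stats.ttest_ind_from_stats(mean1, sd1, n1,
                                              mean2, sd2, n2,
                                              equal_var=True)
print(f"t = {t_stat:.2f}, two-tailed p-value = {p_value:.3f}")

Playing with the placeholders shows the pattern the text describes: a larger difference in means, larger sample sizes, or less variability within each design all push the p-value lower.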
We are delighted that the software does all the necessary mathematics and provides us the p-value, so we avoid grinding through a not-so-simple set of formulas by hand. We also avoid needing to go to a t-table (remember your undergrad stats course?) and finding the critical value for determining whether a "result is significant" (a synonym for a result that rejects H0). By getting a significant result, we have a clear decision about which design should be viewed as superior (as measured by mean sophistication). Ultimately, to get our answer (accept H0 or reject H0), all we had to do was observe the p-value and note whether it was above or below 0.05. Thus, again, the "mantra": p-value says it all.
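If it helps to see that mantra spelled out, here is a small hypothetical sketch of the decision rule, along with the t-table lookup the software spares us. The degrees of freedom shown are tied to the made-up sample sizes in the earlier sketch, not to values from the book.

from scipy import stats

alpha = 0.05                  # the traditional cutoff point
p_value = 0.023               # the p-value reported in the text

# The "p-value says it all" rule: compare it to 0.05 and decide.
decision = "reject H0" if p_value < alpha else "do not reject H0"
print(decision)               # reject H0, so the designs' means differ

# The equivalent t-table lookup the software spares us: with the
# hypothetical sample sizes above (30 and 30), df = n1 + n2 - 2 = 58.
df = 58
t_critical = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical value
print(f"critical t = {t_critical:.2f}")       # |t| beyond this is significant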
2.6 BUT WHAT IF WE CONCLUDE THAT THE MEANS AREN'T DIFFERENT?
What if we did not find a significant result (i.e., a significant difference between the means for the two designs) and thus, the data did not indicate a clear winner between the two designs? And what if increasing the sample sizes with another survey still produced no significant difference?
If there is no clear winner, there's no need to slink away with your tail between your legs. Invariably, other factors will help your team make a decision.
For example: one design may be cheaper to implement because it contains original
artwork, whereas the other design relies on expensive licensed photography. Or one
design is more expensive because it will mean 2 weeks of Flash programming work,
while the other design requires none. Perhaps you use Design 2 (the couple under the
umbrella) because you're launching the new home page in the fall, and you switch to
Design 1 (the café scene) come spring.
In any case, you've done your job—and you're ready to unveil the results.
 