the alpha level that you use for a particular statistical analysis. For example,
in a very preliminary and exploratory phase of a research program, where
you are not sure which variables are important and which are not, you
may not want to miss an effect just because you have not controlled for
all of the effects that you should. Here you might want to modify your
alpha level to .10 or greater to be able to recognize a potential effect that
is worth pursuing in the future.
On the other hand, if you are performing a large number of compar-
isons of means, or performing several related analyses, you can expect
about 5 percent of them to be significant even if the null hypothesis is true.
Thus, you may not be able to distinguish which effects are obtained by
chance and which are due to some valid effect of the independent variable.
In these situations, it is common practice to guard against such alpha
inflation by adopting a more stringent alpha level. There are a variety
of rubrics that are documented in a number of sources (e.g., Keppel &
Wickens, 2004; Maxwell & Delaney, 2000) that can be used to help ensure
that the alpha level of .05 holds across the range of your analyses. For
example, a Bonferroni correction would involve dividing the alpha level
of .05 by the number of comparisons we were making. Thus, if we were
making five comparisons, our Bonferroni-corrected alpha level would be
.01 (.05/5 = .01). We discuss this topic in more detail in Chapter 7 in
Section 7.2.
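The arithmetic of the Bonferroni correction is simple enough to sketch in a few lines of code. The function name below is our own illustration, not from the text; it simply divides the familywise alpha level by the number of comparisons, as described above:

```python
# Bonferroni correction: divide the familywise alpha level by the
# number of comparisons, so the chance of at least one Type I error
# across the whole family stays at (or below) the nominal level.

def bonferroni_alpha(familywise_alpha: float, n_comparisons: int) -> float:
    """Return the per-comparison alpha level."""
    return familywise_alpha / n_comparisons

# The example from the text: five comparisons at a familywise .05 level.
per_test_alpha = bonferroni_alpha(0.05, 5)
print(round(per_test_alpha, 4))  # 0.01
```

Each individual comparison is then evaluated against the stricter per-comparison criterion (here, .01) rather than the usual .05.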
4.4.5 TYPE I AND TYPE II ERRORS
In Equation 4.1, we obtained an F ratio of 18.75 and asserted that we
were looking at a significant mean difference. The result is statistically
significant in the sense that a variance ratio of 18.75 would ordinarily be
obtained less than 5 percent of the time; that is, the probability of that
value occurring by chance alone if the null hypothesis is true is less than
.05 (p < .05). Of course, in the long run we will be wrong 5 percent of
the time (because F ratios that large actually are observed in the sampling
distribution even though such occurrences are infrequent), but we are
willing to risk committing this error because we will never be able to be
absolutely certain of anything.
The error just described is called a Type I error. It occurs when we
are wrong in rejecting the null hypothesis. In such a situation, the means
are not “truly” different (their difference is essentially zero), because the
means are derived from the same population, but we make the judg-
ment that the means were significantly different. A Type I error is there-
fore a false positive judgment concerning the validity of the mean dif-
ference obtained. The chances of making a Type I error correspond to
our alpha level. In this case the probability of committing a Type I error
is .05.
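The claim that the Type I error rate equals the alpha level can be checked with a small simulation. This sketch is our own illustration (not from the text): both samples are drawn from the same normal population, so every "significant" difference is by definition a false positive, and with a .05 criterion roughly 5 percent of replications should reject the null. For simplicity it uses a two-sample z test with the population standard deviation assumed known.

```python
import math
import random

def z_test_rejects(n: int, critical_z: float, rng: random.Random) -> bool:
    """Draw two samples of size n from the SAME N(0, 1) population and
    test the mean difference with a two-sample z test (sigma known = 1).
    Returns True when the null hypothesis is (wrongly) rejected."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(0.0, 1.0) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = math.sqrt(1.0 / n + 1.0 / n)  # standard error of the difference
    return abs(mean_diff / se) > critical_z

rng = random.Random(42)
reps = 10_000
critical_z = 1.96  # two-tailed cutoff for alpha = .05
false_positives = sum(z_test_rejects(30, critical_z, rng) for _ in range(reps))
print(false_positives / reps)  # close to .05
```

The observed false-positive proportion hovers around .05, matching the alpha level, exactly as the text describes.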
A Type II error is the other side of the coin. The reality is that the
means did come from different populations, and we should have properly