“The uniform(0,1) distribution is also the beta(1,1) distribution.
Updating the beta(a, b) distribution after s successes and f failures is easy,
namely, the new distribution is beta(a + s, b + f). So for s = 18 and f = 3,
the posterior distribution under H₁ is beta(19, 4).”
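The conjugate updating described in the quoted passage can be verified with a short sketch; the use of scipy and the variable names below are ours, not the text's.

    # Illustrative sketch only: updating a beta(a, b) prior after s successes
    # and f failures simply adds the counts to the parameters.
    from scipy.stats import beta

    a, b = 1, 1                      # uniform(0, 1) prior, i.e., beta(1, 1)
    s, f = 18, 3                     # observed successes and failures
    posterior = beta(a + s, b + f)   # conjugate update gives beta(19, 4)

    print(posterior.args)            # (19, 4)
    print(posterior.mean())          # 19 / 23, about 0.83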
The subjective approach places an added burden on the experimenter.
As always, she needs to specify each of the following:
• Maximum acceptable frequency of Type I errors (that is, the significance level)
• Alternative hypotheses of interest
• Power desired against each alternative
• Losses associated with Type I and Type II errors
With the Bayesian approach, she must also provide a priori probabilities.
An argument in favor of subjective probabilities is that they permit expert
judgment to be incorporated formally into inferences and decision-making.
Arguing against them are the words of the late Edward Barankin: “How are
you planning to get these values—beat them out of the researcher?” More
appealing, if perhaps no more successful, approaches are described by
Good [1950] and Kadane et al. [1980].
Bayes' Factor
One approach that takes advantage of the opportunities Bayes' Theorem
provides, while avoiding its limitations and the objections raised in the
courts, is the use of the minimum Bayes factor introduced by Edwards
et al. [1963].
The Bayes factor is a measure of the degree to which the data from a
study moves us from our initial position. Let B denote the odds we put on
the primary hypothesis before we examine the data, and let A be the odds
we assign after seeing the data; the Bayes factor is defined as A / B .
If the Bayes factor is equal to 1/10th, it means that the study results
have decreased the relative odds assigned to the primary hypothesis by
tenfold. For example, suppose the probability of the primary hypothesis
with respect to the alternate hypothesis was high to begin with, say 9 to 1.
A tenfold decrease would mean a change to odds of 9 to 10, a probability
of 47%. A further independent study with a Bayes factor of 1/10th would
mean a change to a posteriori odds of 9 to 100, less than 9%.
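The odds arithmetic in this example can be checked with a few lines of code; the helper functions below are our own sketch, not part of the text.

    # Sketch of the odds updating described above; function names are ours.
    def update_odds(prior_odds, bayes_factor):
        # Posterior odds = prior odds x Bayes factor
        return prior_odds * bayes_factor

    def odds_to_probability(odds):
        # Odds of o to 1 correspond to a probability of o / (1 + o)
        return odds / (1 + odds)

    prior_odds = 9.0                            # 9 to 1 on the primary hypothesis
    after_one = update_odds(prior_odds, 0.1)    # 9 to 10
    after_two = update_odds(after_one, 0.1)     # 9 to 100

    print(odds_to_probability(after_one))       # about 0.47
    print(odds_to_probability(after_two))       # about 0.083, i.e., less than 9%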
The minimum Bayes factor is calculated from the same information used
to determine the p value, and it can easily be derived from standard
analytic results. In the words of Goodman [2001], “If a statistical test is
based on a Gaussian approximation, the strongest Bayes factor against the
null hypothesis is exp(−Z²/2), where Z is the number of standard errors
from the null value. If the log-likelihood of a model is reported, the
minimum Bayes factor is simply the exponential of the difference between
the two log-likelihoods.”
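Under the Gaussian approximation Goodman describes, the bound exp(−Z²/2) is easy to compute; the function below is our own illustration, not code from the text.

    import math

    def minimum_bayes_factor(z):
        # Goodman's bound under a Gaussian approximation: exp(-z^2 / 2)
        return math.exp(-z * z / 2)

    # A Z of 1.96 (two-sided p of about 0.05) yields a minimum Bayes factor
    # of roughly 0.15, so the data can shift the odds against the null by at
    # most about 6.8-fold.
    print(minimum_bayes_factor(1.96))   # about 0.147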