The sensitivity of the forecaster is defined as a/(a+c); that is, the frequency of true
positive predictions divided by the total number of situations where action was
actually needed (so-called 'cases'). The specificity of the forecaster is defined as
d/(b+d); that is, the frequency of true negative predictions divided by the total
number of situations in which action was not needed (so-called 'controls'). The
likelihood ratio for a positive prediction (LR+) is defined as sensitivity/[1 −
specificity] (i.e., the true positive proportion divided by the false positive
proportion). The likelihood ratio for a negative prediction (LR−) is defined as [1 −
sensitivity]/specificity (i.e., the false negative proportion divided by the true negative
proportion). In order to construct Table 12.1 it is necessary to have an independent
means of distinguishing cases from controls; i.e., the forecaster must be assessed
against an independent 'gold standard'. Murtaugh (1996) provides a useful
discussion of this issue in the wider context of ecological indicators. The version of
Bayes's theorem given in equation 12.1 allows the likelihood ratio (LR+ or LR−) for a
forecaster to be combined with an initial assessment of the need for action (the 'prior
probability') to produce an updated assessment (the 'posterior probability').
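These definitions can be sketched in a few lines of code. The counts below are hypothetical and chosen only for illustration (they are not taken from Table 12.1); the layout assumed is the conventional 2×2 arrangement, with a = true positives, b = false positives, c = false negatives and d = true negatives:

```python
def forecaster_metrics(a, b, c, d):
    """Compute sensitivity, specificity, LR+ and LR- from a 2x2 table.

    a = true positives, b = false positives,
    c = false negatives, d = true negatives.
    """
    sensitivity = a / (a + c)          # true positive proportion
    specificity = d / (b + d)          # true negative proportion
    lr_pos = sensitivity / (1 - specificity)   # LR+ for a positive prediction
    lr_neg = (1 - sensitivity) / specificity   # LR- for a negative prediction
    return sensitivity, specificity, lr_pos, lr_neg

def posterior_probability(prior_prob, likelihood_ratio):
    """Odds form of Bayes's theorem (equation 12.1):
    posterior odds = prior odds x likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical counts, for illustration only
sens, spec, lr_pos, lr_neg = forecaster_metrics(a=40, b=10, c=10, d=40)
print(round(sens, 2), round(spec, 2), round(lr_pos, 2), round(lr_neg, 2))
# 0.8 0.8 4.0 0.25
```

With these hypothetical counts, a prior probability of 0.3 that action is needed becomes, after a positive prediction, `posterior_probability(0.3, 4.0)` ≈ 0.63: the prediction raises the assessed need for action, but does not make it certain.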
In practical applications of Bayes's theorem to the evaluation of plant disease
forecasters, the prior odds have often been based on the long-term prevalence of
known cases, and decision makers' judgements of risk are accommodated by
specifying different criteria for allocating individual tests to the categories of
positive or negative predictions (Hughes et al., 1999; Yuen and Hughes, 2002).
However, we note that the prior and posterior odds to which equation 12.1 refers
may be based on a variety of formal and informal pieces of information and will, in
almost every practical situation involving a real decision maker, contain some
element of subjective judgement. Indeed, Howson and Urbach (1989, p. 39) argue
that all probabilities (and hence odds) "should be understood as subjective
assessments of credibility, regulated by the requirement that they be overall
consistent". Some scientists may feel uncomfortable with a definition of probability
that uses the word 'subjective', but it should not be a difficult definition to accept
in situations where people are asked to estimate probabilities (or odds) without
recourse to formal calculation. It should not, therefore, be too difficult to accept as
the relevant description of probability in the context of the current discussion, where
decision makers are using and assessing IT-based disease forecasters. We note that
this view of probabilities as subjective judgements is in keeping with Campbell and
Madden's (1990) comment on the importance of growers' perceptions of the
usefulness of forecasters in their adoption.
Consider the potential applications of a forecaster for a grower who has already
obtained an initial assessment of the need for action. A positive prediction ought to
increase the assessed chance that action will be needed; in this case, LR+ is the
relevant indicator of performance. A negative prediction ought to decrease the
assessed chance that action will be needed, in which case LR− is of interest. No
forecaster can reasonably be expected to give perfect performance. Equation 12.1
allows the limits for what should be expected from forecasters (given an initial
assessment of the need for action) to be calculated for either positive or negative
predictions (Yuen and Hughes, 2002; Yuen and Mila, 2003).