the subject, Oreskes et al. (1994:643) argued, “A match between predicted and
obtained output does not verify an open model. . . . If a model fails to repro-
duce observed data, then we know that the model is faulty in some way,
but the reverse is never the case.” Assuming so is a logical fallacy called af-
firming the consequent. “Numerical models are a form of highly complex sci-
entific hypothesis [unlike simple null models that we are accustomed to test-
ing]; . . . verification is impossible.” The utility of models is to guide further
study or help make predictions and decisions regarding complicated systems;
thus they warrant testing, but that testing should be viewed as a never-ending
process of refinement, properly called benchmarking or calibration. Given the
basis of habitat suitability models and the complexity of their many interact-
ing variables, it is likely that any such model could be improved through rig-
orous testing.
Several attempts have been made to test such models. Often this process is
circular, involving just another panel of experts making qualitative assessments
(O'Neil et al. 1988). In other cases, models have been tested using results of a
study on habitat use (Lancia et al. 1982) or use relative to availability
(Thomasma et al. 1991; Powell et al. 1997), with the inevitable associated
shortcomings discussed in detail in this chapter. In some instances, habitat
management prescriptions based on “common knowledge” or expert opinion
have, through collection of better data, been proven faulty (Brown and Batzli
1984; Bart 1995; Beyer et al. 1996). I found one case in which model-derived
habitat scores for individual home ranges were compared with reproduction,
juvenile growth rates, and home range size, but no significant relationships
were observed (Hirsch and Haufler 1993).
Often, models have been tested by comparing habitat-specific densities to
model predictions. However, even if a model explains a significant portion of
the variation in density (Cook and Irwin 1985), the data collected to test (or
purportedly validate) the model could be better used to modify it or build a new
one (Roseberry and Woolf 1998). In most cases, habitat models have proved
to be poor predictors of animal density, indicating either defectiveness of the
model, lack of a clear habitat-density relationship, or effects of other con-
founding factors, such as hunting pressure, which are also habitat-related (Bart
et al. 1984; Laymon and Barrett 1986; Robel et al. 1993; Rempel et al. 1997).
Bender et al. (1996) found that if the variance around the estimated values of
the model inputs were taken into account (a process that is not commonly
done), suitability scores for a variety of habitats that appeared very disparate
were not significantly different; that is, the parameter estimates were not even
precise enough to allow the model to be tested.
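The point Bender et al. make can be sketched with a Monte Carlo simulation. The snippet below is a hypothetical illustration, not their analysis: it assumes an HSI computed as the geometric mean of three suitability indices (a common aggregation form in habitat suitability models), invents the means and standard errors for two habitats, and propagates the sampling error of the inputs into the score. Habitats whose point estimates look disparate can end up with broadly overlapping score intervals.

```python
import numpy as np

rng = np.random.default_rng(0)

def hsi(v1, v2, v3):
    # Hypothetical HSI: geometric mean of three suitability indices,
    # each clipped to the [0, 1] range before aggregation.
    v = np.clip(np.stack([v1, v2, v3]), 0.0, 1.0)
    return v.prod(axis=0) ** (1.0 / 3.0)

def score_distribution(means, ses, n=10_000):
    # Propagate sampling error in the field-measured inputs into the
    # suitability score by resampling each input from a normal
    # distribution with the estimated mean and standard error.
    draws = [rng.normal(m, s, size=n) for m, s in zip(means, ses)]
    return hsi(*draws)

# Two habitats whose point estimates appear quite different
# (all values here are invented for illustration).
a = score_distribution(means=[0.8, 0.7, 0.9], ses=[0.15, 0.20, 0.15])
b = score_distribution(means=[0.5, 0.6, 0.4], ses=[0.15, 0.20, 0.15])

# Once input variance is carried through, the 95% intervals of the
# two score distributions can overlap substantially.
lo_a, hi_a = np.percentile(a, [2.5, 97.5])
lo_b, hi_b = np.percentile(b, [2.5, 97.5])
print(f"habitat A score interval: {lo_a:.2f}-{hi_a:.2f}")
print(f"habitat B score interval: {lo_b:.2f}-{hi_b:.2f}")
```

When the intervals overlap, the data are too imprecise to distinguish the habitats' suitability scores, and any test of the model against observed densities in those habitats is uninformative.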