tionalist, approaches to model evaluation from relativist and anti-foundationalist approaches. 'Classical' objectivist approaches to model analysis hinge on the 'confrontation' of a model with data, with the aim of establishing resemblance between the model's predictions and observations of the 'real' world; they emphasise the empirical verification of models and their outcomes. The tools used for establishing resemblance include graphical and visual diagnostics (e.g., time-series and residual plots) and statistical analyses (e.g., correlation and regression analyses, t-tests, summary difference measures) (Mayer and Butler, 1993). Confrontational evaluation tends to emphasise an 'either-or' perspective: either the model and the predictions it generates are unambiguously valid, or they are rejected as unambiguously indefensible, with little in-between (Oreskes et al., 1994; Kleindorfer et al., 1998).
Contemporary philosophy of science emphasises several problems with the objectivist view that there is any unambiguous and impartial foundation for evaluating models and theories through some kind of self-evident and unproblematic confrontation with empirical data (Kleindorfer et al., 1998). First, recent discussions of model evaluation focus on the problems in seeing a model as 'true' (Rykiel, 1996; Oreskes, 1998; Brown et al., 2006). Second, even those embracing the idea of falsification as an alternative to the idea of validation must confront the problem of underdetermination. Observational data, it is argued, do not provide unambiguous grounds for evaluating theories, because infinitely many hypotheses might explain a given dataset, even if only a small subset of these are actually plausible. This means that just because a model's predictions match empirical observations to some acceptable level, the model cannot be deemed either 'true' or 'correct'. A subset of the underdetermination problem is equifinality, where there may be 'multiple model representations that provide acceptable simulations for any environmental system' (Beven, 2002, p. 2417). Finally, even the observed data used in the validation process carry assumptions, and so their status as a unique or truthful description of a system or phenomenon is itself questionable (Oreskes et al., 1994; Kleindorfer et al., 1998).
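Equifinality can be made concrete with a toy example (all numbers and the tolerance below are hypothetical, not drawn from Beven, 2002): several distinct parameterisations of a simple exponential store each reproduce the 'observed' series within a plausible measurement tolerance, so the data alone cannot select between them.

```python
import math

# Toy system: an exponential store S(t) = S0 * exp(-k * t),
# sampled at a few times and judged against 'observations'.
times = [0.0, 1.0, 2.0, 3.0]
observed = [10.0, 6.1, 3.6, 2.2]  # hypothetical observations
tolerance = 0.3                   # acceptable absolute error

def simulate(s0, k):
    return [s0 * math.exp(-k * t) for t in times]

def acceptable(params):
    s0, k = params
    return all(abs(s - o) <= tolerance
               for s, o in zip(simulate(s0, k), observed))

# Candidate parameter sets (s0, k): more than one passes the test,
# so the observations underdetermine the parameterisation.
candidates = [(10.0, 0.50), (9.9, 0.49), (10.2, 0.51), (10.0, 0.80)]
behavioural = [p for p in candidates if acceptable(p)]
print(behavioural)
```

Here three of the four candidates are 'behavioural' in Beven's sense: the confrontation with data rejects only the clearly wrong parameter set, leaving a family of acceptable representations.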
Even if their truth cannot be demonstrated incontrovertibly, models do have utility for elucidating how a system 'works' and for isolating where epistemic uncertainty is highest. Thus, and in keeping with a more exploratory approach to modelling, alternative modes of model evaluation have been developed, which tend to focus on what has been learned rather than on assessing the degree to which observations match model predictions. The adoption of more experimental approaches towards simulation modelling is premised on the belief that if models are experiments they should be evaluated as such (Dowling, 1999; Peck, 2004). One such approach is pattern-oriented modelling (POM; Wiegand et al., 2003; Grimm et al., 2005). POM uses multiple observed spatio-temporal patterns with the aim of optimising model structure (by identifying components of the model central to aspects of observed behaviour), reducing parameter uncertainty, and testing and exploring alternative model representations (Grimm et al., 2005).
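The POM idea of filtering model structures against several patterns at once can be sketched schematically (the patterns, tolerance rule, and model names below are hypothetical illustrations, not taken from the cited papers):

```python
# Pattern-oriented filtering: retain only model structures that
# reproduce ALL observed patterns, not a single aggregate statistic.
observed_patterns = {"cluster_size": 5, "trend": "increasing", "cv": 0.4}

def matches(predicted, observed):
    # A pattern 'matches' if categorical values agree exactly and
    # numeric values fall within 25% of the observed value.
    if isinstance(observed, str):
        return predicted == observed
    return abs(predicted - observed) <= 0.25 * abs(observed)

# Three hypothetical candidate structures and the patterns each generates.
candidates = {
    "model_A": {"cluster_size": 5.2, "trend": "increasing", "cv": 0.35},
    "model_B": {"cluster_size": 9.0, "trend": "increasing", "cv": 0.41},
    "model_C": {"cluster_size": 4.8, "trend": "flat", "cv": 0.39},
}

retained = [
    name for name, patterns in candidates.items()
    if all(matches(patterns[k], v) for k, v in observed_patterns.items())
]
print(retained)
```

Requiring every pattern to match simultaneously is what gives the approach its discriminating power: a structure that fits one pattern but misses another is filtered out.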
Another, more experimental, approach is what Castella et al. (2005) call 'social validation', in which a model's users collectively agree that the model is a legitimate representation of the system (cf. Küppers and Lenhard, 2005); again, this is very different from the traditional emphasis on resemblance between observations and predictions. Castella et al. argue that social validation is crucial in participatory modelling, stating (p. 27) that 'a model can only be used as a mediating tool for concerted action once it has been perfectly understood and is considered by decision makers to be