compare results. Calibration seeks to reproduce recent historical conditions as a check on a model's plausibility. The fundamental challenge is that models seeking to examine impacts fifty or one hundred years into the future have no historical record of comparable length against which to compare.
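To make the calibration step concrete, the sketch below fits a single parameter of a toy model to a hypothetical historical record; the model, data and parameter names are illustrative assumptions, not drawn from any published IA model.

```python
import numpy as np

# Hypothetical "recent historical record": a noisy warming trend.
rng = np.random.default_rng(0)
years = np.arange(1960, 2011)
observed = 14.0 + 0.015 * (years - 1960) + rng.normal(0.0, 0.1, years.size)

def toy_model(trend, years):
    # Toy model: global mean temperature as a linear response.
    return 14.0 + trend * (years - 1960)

def calibration_error(trend):
    # Mean squared error between the model hindcast and the record.
    return np.mean((toy_model(trend, years) - observed) ** 2)

# Calibration: choose the parameter value whose hindcast best
# reproduces recent historical conditions.
candidates = np.linspace(0.0, 0.05, 501)
best = min(candidates, key=calibration_error)
print(f"calibrated trend: {best:.4f} degC/yr")
```

A parameter calibrated this way demonstrates only consistency with the recent past; it says nothing about behaviour over the fifty- or hundred-year horizons the models are asked to address.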
Practical and conceptual validation of models introduces further challenges. Practical validation tests the validity of model outcomes, whereas conceptual validation tests whether the model structure is consistent with current knowledge. Validation and calibration become more difficult the more integrated the model becomes. There is good evidence (Rotmans and Dowlatabadi, 1998) that simple models often perform as well as more complex ones, and much of the work of validation occurs at the level of submodels. The task of validating an integrated model involves testing for perverse or implausible outcomes, either by direct comparison with the systems being studied or through comparison with other models, as sketched below. Comparisons across models have produced highly divergent results, and even where outputs are comparable, modellers must show that they reach similar answers via the same causal mechanisms.
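A minimal sketch of that screening step, assuming two placeholder models and arbitrary plausibility bounds (none of which come from the text):

```python
def model_a(co2_multiplier):
    # Placeholder integrated model: warming for a given CO2 scaling.
    return 2.5 * co2_multiplier

def model_b(co2_multiplier):
    # Placeholder comparison model run on the same scenario.
    return 2.3 * co2_multiplier

scenario = 1.0  # hypothetical doubled-CO2 scenario, relative units
out_a, out_b = model_a(scenario), model_b(scenario)

# Conceptual screen: flag outcomes outside what current knowledge
# deems physically plausible (bounds here are arbitrary).
PLAUSIBLE_DEG_C = (-5.0, 30.0)
if not PLAUSIBLE_DEG_C[0] < out_a < PLAUSIBLE_DEG_C[1]:
    print("perverse outcome: fails the plausibility screen")

# Cross-model screen: flag divergence. Numeric agreement alone is
# not sufficient; as the text notes, the models must also reach
# their answers via the same causal mechanisms.
if abs(out_a - out_b) > 1.0:
    print("models diverge; investigate causal mechanisms")
```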
Finally, uncertainty analysis may rely on forms of sensitivity testing. One modelling team developed an innovative system that harnessed networked personal computers through a small programme downloaded by volunteers. The programme uses the processor while the computer is idle to undertake model runs in which the model parameters are altered minutely. This distributed computing approach engaged hundreds of thousands of willing participants and allowed the team to test the sensitivity of their model under a scenario in which carbon dioxide doubles (Stainforth et al., 2005). The results of this study have been used to suggest that temperature changes of up to 11 degrees centigrade are possible with a doubling of carbon dioxide, but this is a misreading of the results. The exercise simply shows that, under some combinations of input variables, the model is capable of generating changes of that magnitude. The extreme finding would need to be validated at a practical and conceptual level to show that these parameterisation changes were in fact plausible in the real world.
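The logic of such a perturbed-parameter experiment can be sketched in a few lines; the toy model, parameter names and distributions below are invented for illustration and do not reproduce the Stainforth et al. (2005) experiment.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_response(base_warming, feedback):
    # Toy equilibrium warming under doubled CO2: a base term
    # amplified by a feedback factor (invented functional form).
    return base_warming / max(1.0 - feedback, 0.05)

# One run per perturbed parameter combination; in the distributed
# experiment, each run would execute on a volunteer's idle PC.
runs = []
for _ in range(100_000):
    base = 1.2 + rng.normal(0.0, 0.3)   # minutely altered parameter 1
    fb = 0.5 + rng.normal(0.0, 0.15)    # minutely altered parameter 2
    runs.append(toy_response(base, fb))

runs = np.array(runs)
print(f"median warming: {np.median(runs):.1f} degC")
print(f"99.9th percentile: {np.percentile(runs, 99.9):.1f} degC")
```

The tail of such an ensemble shows only what the model can generate under some parameter combinations; as stressed above, extreme values still require practical and conceptual validation before being read as plausible real-world outcomes.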
Over the last decade we have seen very rapid advances in IA models, driven by improvements in the quality of the underlying science, better integration of submodels and, inevitably, increases in computing power. The distributed approach described above generated processing power equivalent to one or two supercomputers.
Prediction, Policy and IA
Ultimately, however, all complex systems and IA models face a shared problem: they are expected to predict, but are only able to suggest. The same dilemma affects all assessment methods, including environmental, risk and cumulative assessment. It is a deeper and more intractable challenge that stems from the way science is used to underpin a range of assessment methods. There are many examples where conventional scientific research has identified biogeophysical 'limits to growth'. Iconic examples include research on the bioaccumulation of DDT through the food web, dramatised by Rachel Carson's Silent Spring (1962), or the dramatic discovery of a 'hole' in the ozone layer over Antarctica by scientists from the British Antarctic Survey. Even