each model and the results are compared with the available calibration data. A quantitative measure of
performance is used to assess the acceptability of each model based on the modelling residuals. Any
of the likelihood measures in Box 7.1 or combinations of the measures in Box 7.2 could serve this
purpose. The only requirements are that the measure should increase monotonically with increasing
goodness of fit and that “nonbehavioural” models should have a likelihood of zero. Different likeli-
hood measures or combinations of likelihood measures will, however, lead to different estimates of the
predictive uncertainty.
In using the model for predictions, all simulations with a likelihood measure greater than zero are
allowed to contribute to the distribution of predictions. The predictions of each simulation are weighted by
the likelihood measure associated with that simulation. The cumulative likelihood weighted distribution
of predictions can then be used to estimate quantiles for the predictions at any time step.
Implementation of the GLUE methodology requires a number of decisions to be made as follows:
- which model or models to include in the analysis;
- a feasible range for each parameter value;
- a sampling strategy for the parameter sets;
- an appropriate likelihood measure or measures, including conditions for rejecting models that would not be considered useful in prediction on the basis of their past performance, so leaving those that are considered behavioural.
These decisions are all, to some extent, subjective but an important point is that they must be made
explicit in any application. Then the analysis can be reproduced, if necessary, and the decisions can be
discussed and evaluated by others. Some sources of GLUE software are listed in Appendix A.
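The four decisions above can be made concrete in a short sketch. The model, parameter ranges, and likelihood measure below are purely illustrative assumptions (a two-parameter exponential decay fitted to synthetic "observations"), not any particular GLUE application; the likelihood is a Nash–Sutcliffe-style efficiency truncated at zero so that nonbehavioural parameter sets receive a likelihood of zero, as the methodology requires.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model and synthetic observations, for illustration only.
def model(a, b, t):
    return a * np.exp(-b * t)

t = np.linspace(0.0, 10.0, 50)
obs = model(2.0, 0.3, t) + rng.normal(0.0, 0.05, t.size)

# Decision 2: feasible ranges; Decision 3: uniform Monte Carlo sampling.
N = 5000
a_s = rng.uniform(0.5, 5.0, N)
b_s = rng.uniform(0.01, 1.0, N)

# Decision 4: likelihood measure, truncated at zero so that
# "nonbehavioural" simulations drop out of the prediction step entirely.
sims = model(a_s[:, None], b_s[:, None], t)            # shape (N, T)
ss_res = ((sims - obs) ** 2).sum(axis=1)
ss_tot = ((obs - obs.mean()) ** 2).sum()
L = np.maximum(0.0, 1.0 - ss_res / ss_tot)

# Retain behavioural simulations and renormalise weights to sum to 1.
behavioural = L > 0
L = L[behavioural] / L[behavioural].sum()
sims = sims[behavioural]
```

Every choice here (model, ranges, sampler, measure, rejection threshold) corresponds to one of the subjective decisions listed above, which is why the text insists they be made explicit.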
7.10.1 Deciding on Which Models to Include
Given a large enough sample of Monte Carlo simulations, the range of likelihood weighted predictions
may be evaluated to obtain prediction quantiles at any time step. This is most easily done if the likelihood
values are renormalised such that $\sum_{i=1}^{N} L[M(i)] = 1$, where $M(i)$ now indicates the $i$th Monte Carlo
sample, so that at any time step $t$:

$$P(Q_t < q) = \sum_{i=1}^{N} L\left[ M(i) \mid Q_{i,t} < q \right] \qquad (7.4)$$

where $Q_{i,t}$ is the variable of interest predicted by the $i$th Monte Carlo sample and $N$ is the number
of samples. The prediction quantiles, $P(Q_t < q)$, obtained in this way (as shown, for example, in
Figure 7.8b) are conditioned on the inputs to the model, the model responses for the particular sample
of parameter sets used, the subjective choice of likelihood measure and the observations used in the
calculation of the likelihood measure. They are, therefore, empirical but note that, in such a procedure,
the simulations contributing to a particular quantile interval may vary from time step to time step,
reflecting the nonlinearities and varying time delays in model responses. It also allows for the fact that
the distributional characteristics of the likelihood weighted model predictions may vary from time step
to time step (see Freer et al. (1996) and the case study in Section 7.11).
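Equation (7.4) and its inversion into prediction limits can be sketched as follows. The arrays of predictions and renormalised weights stand in for the output of a GLUE run (the names and the synthetic data are illustrative assumptions); the quantile function sorts the predictions at each time step, accumulates the likelihood weights, and interpolates, so the simulations contributing to a given quantile interval are free to vary from one time step to the next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for GLUE output (illustrative, not real model results):
#   sims : (N, T) behavioural predictions Q_{i,t}
#   L    : (N,) renormalised likelihood weights summing to 1
N, T = 200, 30
sims = rng.normal(loc=np.linspace(1.0, 3.0, T), scale=0.4, size=(N, T))
L = rng.random(N)
L /= L.sum()

# Equation (7.4): sum the weights of simulations with Q_{i,t} < q.
def prob_below(q, t_idx):
    return L[sims[:, t_idx] < q].sum()

# Inverting (7.4): likelihood-weighted prediction quantile at time t_idx.
def prediction_quantile(p, t_idx):
    order = np.argsort(sims[:, t_idx])      # sort predictions at this step
    cum = np.cumsum(L[order])               # cumulative likelihood weight
    return np.interp(p, cum, sims[order, t_idx])

# 5% and 95% prediction limits over all time steps (cf. Figure 7.8b).
lower = np.array([prediction_quantile(0.05, k) for k in range(T)])
upper = np.array([prediction_quantile(0.95, k) for k in range(T)])
```

Because the sort order is recomputed at every time step, the empirical distribution of the weighted predictions is allowed to change shape through time, which is the point made above about nonlinearities and varying time delays in the model responses.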
Any parameter interactions in the model, and any effects of errors in the input data and observational
data, are implicitly reflected in the likelihood measure associated with each simulation and do not therefore
have to be considered separately. This makes an assumption that the effects will be similar during a
prediction, but avoids the problem that they are very difficult indeed to consider separately.