The question of whether one set of parameter values is better than another is open to a variety of approaches, from a visual inspection of plots
of observed and predicted variables to a number of different quantitative measures of goodness of fit,
known variously as objective functions, performance measures, fitness (or misfit) measures, likelihood
measures or possibility measures. Some examples of such measures that have been used in rainfall-runoff
modelling are discussed in Section 7.3.
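One widely used measure of this kind in rainfall-runoff modelling is the Nash-Sutcliffe efficiency. As a minimal illustration ahead of that discussion, the sketch below computes it for a pair of observed and simulated discharge series; the series themselves are made up purely for illustration.

```python
import numpy as np

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit, 0 when the model
    does no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual_ss = np.sum((observed - predicted) ** 2)
    observed_ss = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_ss / observed_ss

# Hypothetical discharge series (m^3/s), for illustration only
q_obs = [1.2, 1.5, 2.3, 3.1, 2.0, 1.4]
q_sim = [1.1, 1.6, 2.0, 3.4, 2.2, 1.3]
print(nash_sutcliffe(q_obs, q_sim))
```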
All model calibrations and subsequent predictions are subject to uncertainty. This uncertainty arises
because no rainfall-runoff model is a true reflection of the processes involved, because it is impossible to
specify the initial and boundary conditions required by the model with complete accuracy, and because
the observational data available for model calibration are not error free. Discussions of the impact of these sources of uncertainty
may be found in the work of Beven (2006a, 2010); see also the exchanges summarised by Beven (2008).
There is a rapidly growing literature on model calibration and the estimation of predictive uncertainty
for hydrological models, and this is an area that has developed greatly in the last decade (at least in the
research domain; there has been, as yet, somewhat less impact on practice). This chapter can give only
a summary of the topics involved; a more extensive analysis of the available methodologies can be
found in Beven (2009). For the purposes of the discussion here, we differentiate several major themes
as follows:
Methods of model calibration that assume an optimum parameter set and ignore the estimation of
predictive uncertainty. These methods range from simple trial and error, with parameter values adjusted
by the user, to the variety of automatic optimisation methods discussed in Section 7.4.
Methods of uncertainty estimation that are based only on prior assumptions about different sources of
uncertainty. These methods are grouped under the name “forward uncertainty analysis” and discussed
in Section 7.5.
Methods of model calibration and uncertainty estimation that use Bayesian statistical methods to
condition posterior parameter distributions given some observations about the catchment. These methods
are grouped under the name “Bayesian conditioning” and discussed in Section 7.7.
Methods of model conditioning that reject the idea that there is an optimum parameter set in favour
of the idea of equifinality of models, as discussed in Section 1.8. Equifinality is the basis of the
GLUE methodology discussed in Section 7.10. In this context, it is perhaps more appropriate to use
model “conditioning” rather than model “calibration” since this approach attempts to take account
of the many model parameter sets that give acceptable simulations. As a result, the predictions are
necessarily associated with some uncertainty.
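To make the contrast with single-optimum calibration concrete, the sketch below illustrates the kind of Monte Carlo conditioning that underlies the equifinality idea: parameter sets are sampled from a prior range, each is scored against observations with a likelihood measure (here the Nash-Sutcliffe efficiency sketched earlier), and every set passing a rejection threshold is retained as behavioural. The one-parameter toy store, the synthetic data, the prior range and the 0.7 threshold are all assumptions made for this sketch, not part of any published methodology.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(rain, k):
    # Hypothetical one-parameter linear store: a stand-in model
    # so that the sketch runs end to end.
    storage, flow = 0.0, []
    for r in rain:
        storage += r
        q = storage / k          # outflow proportional to storage
        storage -= q
        flow.append(q)
    return np.array(flow)

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic "observations" generated with k = 5.0, for illustration only
rain = rng.exponential(2.0, size=100)
q_obs = toy_model(rain, 5.0) + rng.normal(0.0, 0.05, size=100)

# Conditioning step: sample parameter sets from a prior range, score each,
# and keep every "behavioural" set above a rejection threshold rather than
# searching for a single optimum.
samples = rng.uniform(1.0, 20.0, size=2000)
scores = np.array([nash_sutcliffe(q_obs, toy_model(rain, k)) for k in samples])
behavioural = samples[scores > 0.7]      # the threshold is a subjective choice

print(f"{behavioural.size} behavioural parameter sets out of {samples.size}")
if behavioural.size:
    print(f"retained k between {behavioural.min():.2f} and {behavioural.max():.2f}")
```

The spread of the retained parameter values is what carries the predictive uncertainty forward: predictions are made with all behavioural sets, not just the best one.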
In approaching the problem of model calibration or conditioning, there are a number of very basic
points to keep in mind. These may be summarised as follows:
There is most unlikely to be one right answer. Many different models and parameter sets may give good
fits to the data and it may be very difficult to decide whether one is better than another. In particular,
having chosen a model structure, the optimum parameter set for one period of observations may not be
the optimum set for another period. This is because so many of the sources of error in rainfall-runoff
modelling are not simply statistical in nature ( aleatory uncertainties ). They are much more the result
of a lack of knowledge ( epistemic uncertainties , what Knight (1921) called the real uncertainties). In
principle, epistemic uncertainties can be reduced by more observations and experiment. In practice,
this tends to increase our appreciation of complexity without greatly improving predictions.
Calibrated parameter values may be valid only inside the particular model structure used. It may not
be appropriate to use those values on different models (even though the parameters may have the same
name) or in different catchments.
Model results will be much more sensitive to changes in the values of some parameters than to others.
A basic sensitivity analysis should be carried out early on in a study (see Section 7.2).
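A minimal sketch of such a first-pass check, assuming a simple one-at-a-time perturbation of each parameter about a base set, is given below; the two-parameter recession "model", the parameter values and the perturbation fraction are all hypothetical.

```python
import numpy as np

def one_at_a_time_sensitivity(run_model, base_params, q_obs, measure, frac=0.1):
    # Perturb each parameter by +/- frac of its base value (others held fixed)
    # and report the largest resulting change in the goodness-of-fit measure.
    base_score = measure(q_obs, run_model(base_params))
    changes = {}
    for name, value in base_params.items():
        deltas = []
        for factor in (1.0 - frac, 1.0 + frac):
            perturbed = dict(base_params, **{name: value * factor})
            deltas.append(abs(measure(q_obs, run_model(perturbed)) - base_score))
        changes[name] = max(deltas)
    return changes

# Hypothetical two-parameter recession model and synthetic data, for illustration only
def run_model(p):
    t = np.arange(50, dtype=float)
    return p["a"] * np.exp(-t / p["tau"])

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

q_obs = run_model({"a": 3.0, "tau": 12.0}) + np.random.default_rng(0).normal(0.0, 0.05, 50)
print(one_at_a_time_sensitivity(run_model, {"a": 3.0, "tau": 12.0}, q_obs, nse))
```

Parameters producing the largest changes in the measure are those worth the most attention in calibration; more formal sensitivity analysis methods are discussed in Section 7.2.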