even if the full range is truncated, as has been the case in some studies. This is currently the state
of the art in Bayesian inference as applied to hydrological models. It has been applied as part of the
Bayesian Total Error Analysis (BATEA) approach by Kuczera et al. (2006, 2010b; Thyer et al., 2009; Renard et al., 2010) and the Differential Evolution Adaptive Metropolis (DREAM) approach by Vrugt et al. (2008b, 2009). Identification of the posterior distributions of multipliers and model parameters in
these cases involves the use of efficient Monte Carlo techniques (see Box 7.3). It is worth noting that the
implementation of these types of method needs to be done with care, even when the assumptions about the various sources of uncertainty in the hierarchical structure can be considered valid (e.g. Renard et al., 2009).
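
To make the mechanics concrete, the following sketch shows a minimal random-walk Metropolis sampler for the joint posterior of a single model parameter and a rainfall multiplier. The toy model, priors, error variance and tuning constants are all assumptions made for the illustration; they are not the BATEA or DREAM formulations, which use more elaborate hierarchical error models and adaptive sampling schemes.

# A minimal random-walk Metropolis sketch (illustrative only): sampling the joint
# posterior of a single model parameter k and a rainfall multiplier, given a toy
# linear model q = k * multiplier * rain and an assumed Gaussian error model.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data (assumed for the example)
rain = rng.gamma(shape=2.0, scale=5.0, size=100)
true_k, true_mult, sigma = 0.6, 1.1, 0.5
q_obs = true_k * true_mult * rain + rng.normal(0.0, sigma, size=rain.size)

def log_posterior(theta):
    """Log posterior for theta = (k, multiplier); flat prior on k > 0,
    lognormal-type prior keeping the multiplier near 1 (an assumption)."""
    k, mult = theta
    if k <= 0.0 or mult <= 0.0:
        return -np.inf
    residuals = q_obs - k * mult * rain
    log_lik = -0.5 * np.sum(residuals**2) / sigma**2
    log_prior = -0.5 * (np.log(mult) / 0.2) ** 2
    return log_lik + log_prior

# Random-walk Metropolis sampling
n_iter, step = 20000, 0.02
chain = np.empty((n_iter, 2))
theta = np.array([1.0, 1.0])
lp = log_posterior(theta)
for i in range(n_iter):
    proposal = theta + step * rng.standard_normal(2)
    lp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = proposal, lp_prop
    chain[i] = theta

posterior = chain[n_iter // 2:]               # discard burn-in
print("posterior means (k, multiplier):", posterior.mean(axis=0))

In practice, the convergence of such chains should be checked (for example, by comparing several independent chains) before the samples are used to summarise the posterior distribution.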
7.9 Model Calibration Using Set Theoretic Methods
There is another approach to model calibration that relies much less on the specification of a statistical
likelihood function and the idea of a maximum likelihood or optimal model. It was noted in Section 1.8
that detailed examination of response surfaces reveals many different combinations of parameter values
that give good fits to the data, even for relatively simple models. The concept of the maximum likelihood parameter set may then be ill-founded in hydrological modelling, having been carried over from concepts of statistical inference. A basic foundation of the theory of statistical inference is that there is a correct model; the
problem is to estimate the parameters of that model given some uncertainty in the data available. In
hydrology, it is much more difficult to make such an assumption. There is no correct model, and the data
available to evaluate different models may have large uncertainty associated with it, especially for the
extreme events that are often of greatest interest.
An alternative approach to model calibration is to try to determine a set of acceptable models. Set
theoretic methods of calibration are generally based on Monte Carlo simulation. A large number of runs
of the model are made with different randomly chosen parameter sets. Those that meet some performance
criterion or criteria are retained, those that do not are rejected. The result is a set of acceptable models,
rather than a single optimum model. Using all the acceptable models for prediction results in a range
of predictions for each variable of interest, allowing an estimation of prediction intervals. This type of
method has not been used widely in rainfall-runoff modelling (with the exception of the GLUE variant
described in Section 7.10) but there were a number of early studies in water quality modelling (see, for
example, Klepper et al., 1991; Rose et al., 1991; van Straten and Keesman, 1991).
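
As a simple illustration of the accept/reject idea, the following sketch samples parameter sets for a toy two-parameter runoff model, retains those that exceed an assumed Nash-Sutcliffe efficiency threshold, and takes the range of the retained simulations as crude prediction bounds. The model, parameter ranges, threshold and bounding rule are all choices made for the example rather than taken from the studies cited above.

# A sketch of set theoretic calibration by Monte Carlo sampling (illustrative only):
# a toy two-parameter runoff model, uniform sampling over assumed parameter ranges,
# a Nash-Sutcliffe efficiency threshold as the acceptance criterion, and prediction
# bounds taken as the range of the accepted simulations.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations" from the toy model q = a * rain**b plus noise
rain = rng.gamma(2.0, 5.0, size=200)
q_obs = 0.5 * rain**0.9 + rng.normal(0.0, 0.8, size=rain.size)

def simulate(a, b):
    """Toy runoff model."""
    return a * rain**b

def nse(q_sim):
    """Nash-Sutcliffe efficiency of a simulation against the observations."""
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Monte Carlo sampling and accept/reject against the performance threshold
n_samples, threshold = 10000, 0.7
accepted = []
for _ in range(n_samples):
    a = rng.uniform(0.1, 1.5)
    b = rng.uniform(0.5, 1.5)
    q_sim = simulate(a, b)
    if nse(q_sim) >= threshold:
        accepted.append(q_sim)

accepted = np.asarray(accepted)
print(f"{len(accepted)} of {n_samples} parameter sets accepted")

# Range of the accepted simulations as a simple set of prediction bounds
lower, upper = accepted.min(axis=0), accepted.max(axis=0)
print("mean width of the prediction range:", (upper - lower).mean())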
An interesting development in set theoretic approaches has been the multi-criteria calibration strategy
of Yapo et al. (1998) and Gupta et al. (1998). Their approach is based on the concept of the Pareto optimal set: the set of models, with different parameter sets, for which no other model has values of the performance criteria that are at least as good on all of the multiple criteria and better on at least one. In the terminology of the method, the models in the optimal set are “non-dominated”, while every model outside the set is dominated by at least one member of the set. Yapo et al. (1998) have produced an interesting method to define the Pareto optimal set, related to
SCE optimisation (Section 7.4). Rather than a pure Monte Carlo experiment, they start with N randomly
chosen points in the parameter space and then use a search technique to modify the parameter values
and find N sets within the Pareto optimal set (Figure 7.6). They suggest that this will be a much more
efficient means of defining the Pareto optimal set.
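
The dominance test itself is straightforward to implement. The following sketch identifies the non-dominated (Pareto optimal) members of a sample evaluated on two criteria, both to be minimised, by exhaustive pairwise comparison; the criteria values are random stand-ins for the results of model runs, and the exhaustive comparison is used here in place of the evolutionary search of Yapo et al. (1998).

# A sketch of extracting the Pareto optimal (non-dominated) set from a sample of
# parameter sets evaluated on two criteria, both to be minimised. The criteria
# values here are random stand-ins for results of model runs.
import numpy as np

rng = np.random.default_rng(1)
criteria = rng.uniform(size=(500, 2))   # rows: parameter sets; columns: criteria

def pareto_mask(values):
    """Boolean mask of rows that are not dominated by any other row
    (dominance: at least as good on all criteria, better on at least one)."""
    n = values.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.any(np.all(values <= values[i], axis=1) &
                           np.any(values < values[i], axis=1))
        mask[i] = not dominated
    return mask

front = criteria[pareto_mask(criteria)]
print(f"{front.shape[0]} of {criteria.shape[0]} parameter sets are Pareto optimal")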
They demonstrate the use of the model and the resulting prediction limits with the Sacramento ESMA-
type rainfall-runoff model, used in the US National Weather Service River Forecasting System, in an
application to the Leaf River catchment, Mississippi. The model has 13 parameters to be calibrated.
Two performance measures were used in the calibration: a sum of squared errors and a heteroscedastic maximum likelihood criterion. A population of 500 parameter sets was evolved to find the Pareto optimal set, requiring 68 890 runs of the model. The results are shown in Figure 7.7, in terms of the grouping of the 500 final parameter sets on the plane of the two performance measures (from Yapo et al., 1998) and the associated