of loss. Thus, the EP(x > X) can be defined in terms of the cumulative distribution function of the risk of
an outcome, F(X), obtained by integrating over all probabilities of a consequence less than X, given the
probabilities of possible outcomes (both of which might involve some uncertainty):
EP(x > X) = 1 − F(X)    (7.8)
Different loss functions, resulting from different decision options, give different levels of risk as
expressed in terms of exceedance probability in this way. Given the design lifetime of a project, the risk
can also be expressed in terms of expected annual damages (Pingel and Watkins, 2010). Other approaches
to decision making under uncertainty are also available (see Beven (2009), Chapter 6).
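As a rough illustration of how Equation (7.8) might be applied in practice, the following minimal sketch (in Python, using a purely hypothetical lognormal loss distribution and arbitrary parameter values) derives an empirical exceedance probability curve and an estimate of expected annual damages from Monte Carlo samples of annual loss for a single decision option.

# A minimal sketch (hypothetical loss samples) of computing an empirical
# exceedance probability curve EP(x > X) = 1 - F(X) and expected annual damages.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=10.0, sigma=1.2, size=10_000)  # hypothetical annual losses

# Empirical cumulative distribution F(X) and exceedance probability EP(x > X)
X = np.sort(losses)
F = np.arange(1, X.size + 1) / X.size
EP = 1.0 - F

# Expected annual damages: area under the loss-probability curve, i.e. the
# integral of X over F, which for a large sample approaches the mean annual loss
ead = np.sum(0.5 * (X[1:] + X[:-1]) * np.diff(F))
print(f"Expected annual damages ~ {ead:,.0f}")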
7.16 Dynamic Parameters and Model Structural Error
It was noted earlier that a completely objective identification of different sources of uncertainty in the
modelling process is very difficult, if not impossible, because of the strong interaction of different sources
in the modelling process and the limited information in a series of model residuals. In particular, the
potential interaction between input errors and model structural errors was noted (see also Beven, 2005).
Model structural error in statistical analysis of uncertainties is generally ignored (unless some identifiable
functional form of model inadequacy or discrepancy function can be defined); model identification is
generally carried out as if the model were correct. But of course, in hydrology, we know very well that
the model is only a very approximate representation of the complexity of the processes occurring in a
catchment and, in some cases, might not be at all good. Thus, the assumption that the model is correct
may not be a good one.
So the question arises as to whether information can be gained about model structural error given the
uncertainties in the modelling process. In some studies, simple intercomparisons between different model
structures have been made (e.g. Refsgaard and Knudsen, 1996; Butts et al. , 2004). In one study, Perrin
et al. (2001) compared 19 different models on 429 catchments. These were daily lumped catchment
models but some of their conclusions should be expected to hold more generally. They showed that
more complex models generally do better in simulating the observed discharges in calibration but do not
necessarily provide better predictions out of calibration, suggesting the more complex models are subject
to overfitting. Refsgaard et al. (2006) also give a nice example of how the implementation of a single
model structure by different groups of modellers can be very different. Commonly, this type of study has
suggested that there may be little to choose between the performance of quite different models, and that
combinations of models (for example, using Bayesian Model Averaging) can perform better than any
single model (e.g. Hsu et al. , 2009). As noted earlier, multiple model structures are easily incorporated
into the equifinality principle and the GLUE framework, as long as each model structure is subject to the
same type of evaluation to infer a likelihood weighting.
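As a rough sketch of the kind of likelihood weighting of multiple model structures referred to here, the Python fragment below combines prediction-period simulations from several models using weights derived from an informal efficiency-based likelihood in the calibration period. The efficiency measure, the rejection of non-behavioural models at zero efficiency, and the function names are illustrative assumptions, not the specific procedures of the studies cited.

# A minimal sketch of weighting predictions from different model structures by a
# common informal likelihood measure, in the spirit of GLUE-style model averaging.
import numpy as np

def nash_sutcliffe(obs, sim):
    # Nash-Sutcliffe efficiency of a simulated discharge series against observations
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def weighted_prediction(obs_cal, sims_cal, sims_pred):
    # obs_cal   : observed calibration discharges (1-D array)
    # sims_cal  : {model name: simulated calibration discharges}
    # sims_pred : {model name: simulated prediction-period discharges}
    raw = {m: max(nash_sutcliffe(obs_cal, s), 0.0) for m, s in sims_cal.items()}
    total = sum(raw.values())
    weights = {m: w / total for m, w in raw.items()}  # normalised likelihood weights
    combined = sum(w * sims_pred[m] for m, w in weights.items())
    return combined, weights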
More recent approaches to this problem have been based on the idea that, if model structural error
is important, it should lead to consistent patterns in the model residuals. One way of trying to identify
such consistent patterns is to allow the distributions of parameters to be time variable. This can be done
in a simple way, for example to investigate whether there is consistency in parameter distributions
between different types of hydrological condition. Choi and Beven (2007) did this within the GLUE
framework for a small humid catchment in Korea, with 15 classes of hydrological period from wet to dry.
They found that the posterior distributions of the TOPMODEL parameters for the wet and dry periods
did not overlap, but suggested that, in making predictions, different sets of parameters
might be used in different periods to compensate for the apparent model structural error.
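A simple way of looking for this kind of (in)consistency, loosely in the spirit of the Choi and Beven (2007) analysis, is to condition the same Monte Carlo sample of parameter sets separately on different classes of hydrological period and compare the resulting behavioural parameter ranges. The sketch below is only illustrative; the period classification, the efficiency measure and the behavioural threshold are assumptions, not the procedure of that study.

# A minimal sketch of comparing behavioural parameter ranges between period classes.
import numpy as np

def behavioural_ranges(param_sets, obs, sims, period_mask, threshold=0.5):
    # param_sets  : (n_sets, n_params) sampled parameter values
    # obs         : (n_steps,) observed discharges
    # sims        : (n_sets, n_steps) simulated discharges, one row per parameter set
    # period_mask : boolean mask selecting the time steps of one period class
    o = obs[period_mask]
    s = sims[:, period_mask]
    eff = 1.0 - np.sum((s - o) ** 2, axis=1) / np.sum((o - o.mean()) ** 2)
    keep = eff > threshold                      # behavioural sets for this period class
    return param_sets[keep].min(axis=0), param_sets[keep].max(axis=0)

# If the ranges returned for a wet-period mask and a dry-period mask do not overlap
# for some parameter, that is a symptom of model structural error being compensated
# for by different effective parameter values in different conditions.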
Another approach has been to use filtered estimates of parameter distributions that are allowed to evolve
over time. In the dynamic identifiability analysis (DYNIA) approach of Wagener et al. (2003), it was reasoned that
the distributions of different parameters would be conditioned during different periods and different time