[Figure 7.19 The South Tyne at Featherstone: estimated runoff coefficients for events; black circles have values between 0.3 and 0.9 (after Beven et al., 2011).]
from error characteristics that are changing over time, it should be possible to identify periods that are physically inconsistent with the water balance. We can be sure that such periods will have an effect on all model calibration strategies and increase the possibility of rejecting a good model because of epistemic errors in the calibration data. This possibility makes a very strong case for quality assurance of the calibration data prior to running any model. It is clear that if the issue of disinformation in model calibration is ignored, and all residuals are treated as aleatory and informative, we should expect our inference to be wrong. Good hydrological science demands that we try to avoid that. We should, therefore, try to find ways of at least mitigating the worst effects of disinformative data on inference about models and parameters (see Beven et al., 2011).
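As a concrete illustration of the kind of water-balance screening that Figure 7.19 implies, the sketch below computes event runoff coefficients and flags events that fall outside a plausible band. This is a minimal sketch, not the procedure of Beven et al. (2011): the event totals are invented, the [0.3, 0.9] band is taken from the figure caption purely for illustration, and the hard practical steps of event separation and estimating catchment-average rainfall are assumed to have been done already.

```python
import numpy as np

def flag_disinformative(event_rain_mm, event_flow_mm, lower=0.3, upper=0.9):
    """Flag events whose runoff coefficients fall outside a plausible band.

    A coefficient above 1 means more runoff than rainfall, which is
    physically inconsistent with the event water balance; very low
    values are also treated as suspect here.
    """
    rain = np.asarray(event_rain_mm, dtype=float)
    flow = np.asarray(event_flow_mm, dtype=float)
    rc = flow / rain                      # event runoff coefficient
    suspect = (rc < lower) | (rc > upper)
    return rc, suspect

# Invented event totals (mm over the catchment), for illustration only
rain = [25.0, 40.0, 12.0, 30.0]
flow = [15.0, 44.0, 2.0, 18.0]  # second event: runoff exceeds rainfall
rc, suspect = flag_disinformative(rain, flow)
for i, (c, s) in enumerate(zip(rc, suspect)):
    print(f"event {i}: runoff coefficient {c:.2f}" + ("  <-- suspect" if s else ""))
```

Flagged events are then candidates for exclusion from the calibration period rather than being treated as a source of informative residuals.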
There is one further issue concerning the calibration data in rainfall-runoff modelling: the additional information that might be added by longer periods of calibration data. Statistical likelihood functions will generally result in further stretching of the response surface and sharpening of posterior parameter distributions as more calibration data and longer residual series are added (this is again the coherence argument of Mantovan and Todini, 2006). The new residuals, if of similar characteristics to those seen before, reinforce the inference based on earlier periods. This is, however, a choice, and an argument could also be made that if the new data has similar characteristics to previous calibration data then not much real additional information is being added (and certainly much less than a statistical likelihood function would suggest; see Box 7.1). If the new data is of quite different characteristics, however, then there is an opportunity to test a model against out-of-previous-sample predictions. The information then provided is much greater and should be given greater weight in conditioning (see also Chapter 12).
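The stretching effect is easy to demonstrate. The sketch below is a minimal illustration assuming a one-parameter model that predicts a constant, with independent Gaussian residuals of known standard deviation (not any particular likelihood from Box 7.1); all values in it are invented. It shows that doubling the length of a residual series with similar characteristics roughly doubles the log-likelihood differences between parameter values, sharpening the posterior even though arguably little genuinely new information has been added.

```python
import numpy as np

rng = np.random.default_rng(42)
true_param, sigma = 1.0, 0.5  # hypothetical "true" value and error scale

def gaussian_loglik(param, obs):
    """Independent Gaussian log-likelihood: every added residual contributes
    another additive term, so the surface steepens as the series lengthens."""
    resid = obs - param  # one-parameter model: predicts a constant
    return (-0.5 * np.sum((resid / sigma) ** 2)
            - len(obs) * np.log(sigma * np.sqrt(2.0 * np.pi)))

# A residual series, then one twice as long with similar characteristics
short = true_param + sigma * rng.standard_normal(50)
longer = np.concatenate([short, true_param + sigma * rng.standard_normal(50)])

for p in (0.8, 1.0, 1.2):
    print(f"param {p:.1f}: loglik n=50 {gaussian_loglik(p, short):9.2f}, "
          f"n=100 {gaussian_loglik(p, longer):9.2f}")
```

The absolute values matter less than the spread: with n = 100 the log-likelihood gap between the best and worst parameter values is roughly twice that for n = 50, which is exactly the coherence behaviour described above.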
What are the implications of these issues for how the likelihood of a particular model and parameter set should be assessed? The answer is that we do not really know. We would wish to use as much information as possible, from as wide a range of observations as possible, in deciding on a relative likelihood, while avoiding periods of disinformation and avoiding over-conditioning on data that is subject to epistemic errors. There is still very much to be learned about this part of the modelling process. It can