with no hydrograph response might be more ambiguous: it could be hydrologically consistent even if
difficult to model and thus be a result of model structure error rather than disinformative data. Even
widely used data sets might show inconsistencies and introduce disinformation into the inference
process. An example is provided by the Leaf River catchment data that has been used in many model
calibration studies. Beven (2009b) suggested that there might be a period of inconsistent data in the
validation period used in Vrugt et al. (2009). We return to the issue of disinformation in calibration
in Chapter 7.
There are a number of important implications that follow from these considerations:
• The parameter values determined by calibration are effectively valid only inside the model structure
used in the calibration. It may not be appropriate to use those values in different models (even though
the parameters may have the same names) or in different catchments.
• Care should be taken not to include inconsistent or disinformative data in the calibration process, as
this will lead to biased estimates of parameter values. This is one example of a more general problem
of errors and uncertainties in the modelling process that result from a lack of knowledge rather than
statistically random errors.
• The concept of an optimum parameter set may be ill-founded in hydrological modelling. While one
optimum parameter set can often be found, there will usually be many other parameter sets that are
very nearly as good, perhaps from very different parts of the parameter space. It is highly likely that,
given a number of parameter sets that give reasonable fits to the data, the ranking of those sets in terms
of the objective function will differ between different periods of calibration data. To declare one set
of parameter values the optimum is therefore a somewhat arbitrary choice. Some examples of
such behaviour will be seen in the “dotty plots” of Chapter 7, where the possibility of rejecting the
concept of an optimum parameter set in favour of a methodology based on the equifinality of different
model structures and parameter sets will be considered.
• If the concept of an optimum parameter set must be superseded by the idea that many possible parameter
sets (and perhaps models) may provide acceptable simulations of the response of a particular catchment,
then it follows that validation of those models may be equally difficult. In fact, rejecting some of the
acceptable models given additional data may be a much more practical methodology than suggesting
that models might be validated.
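The near-equivalence of many parameter sets can be illustrated with a small Monte Carlo experiment of the kind behind the "dotty plots" mentioned above. The bucket model, parameter ranges, synthetic data and efficiency measure below are all invented for illustration; they are not taken from any particular catchment study:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(rainfall, k, smax):
    """Toy bucket model (illustrative only): storage fills with rain,
    is capped at smax (excess is lost), and drains as q = k * storage."""
    s, q = 0.0, []
    for r in rainfall:
        s = min(s + r, smax)
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

# Synthetic "observations": the same model run with known parameters
# plus a little noise, standing in for real discharge data.
rain = rng.exponential(2.0, size=200)
obs = simulate(rain, k=0.3, smax=10.0) + rng.normal(0.0, 0.1, 200)

def nse(sim, o):
    """Nash-Sutcliffe efficiency as a simple objective function."""
    return 1 - np.sum((sim - o) ** 2) / np.sum((o - o.mean()) ** 2)

# Uniform Monte Carlo sample of the parameter space.
ks = rng.uniform(0.05, 0.95, 2000)
smaxs = rng.uniform(2.0, 30.0, 2000)
scores = np.array([nse(simulate(rain, k, sm), obs)
                   for k, sm in zip(ks, smaxs)])

best = scores.max()
near_optimal = int(np.sum(scores > best - 0.05))
print(f"best NSE {best:.3f}; {near_optimal} sets within 0.05 of the best")
```

Even in this two-parameter toy, many sampled sets score almost as well as the best one, and which of them ranks first shifts if the calibration period is changed.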
The idea of equifinality is an important one in what follows, particularly from Chapter 7 onwards.
It suggests that, given the limitations of both our model structures and the observed data, there may
be many representations of a catchment that may be equally valid in terms of their ability to produce
acceptable simulations of the available data. In essence then, different model structures and
parameter sets used within a model structure are competing to be considered acceptable as simulators.
Models can be treated and tested as hypotheses about how the catchment system is functioning in
this sense (see Beven, 2010a). Some may be rejected in the evaluation of different model structures
suggested in Section 1.7; even if only one model is retained, the evaluation of the performance of
different parameter sets against the observed data will usually result in many parameter sets that produce
acceptable simulations.
The results with different parameter sets will not, of course, be identical either in simulation or in
the predictions required by the modelling project. An optimum parameter set will give only a single
prediction. Multiple acceptable parameter sets will give a range of predictions. This may actually be an
advantage since it allows the possibility of assessing the uncertainty in predictions, as conditioned on
the calibration data, and then using that uncertainty as part of the decision making process arising from
a modelling project. A methodology for doing this is outlined in Chapter 7.
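The step from multiple acceptable parameter sets to a range of predictions can be sketched in the same spirit. This is a GLUE-like treatment in outline only; the toy storage model, the 0.8 efficiency threshold and the simple min/max range are illustrative assumptions, not the methodology of Chapter 7:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(rain, k):
    """Toy single-store model (illustrative only): q = k * storage."""
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

# Synthetic calibration data from a known parameter plus noise.
cal_rain = rng.exponential(2.0, 150)
obs = simulate(cal_rain, k=0.3) + rng.normal(0.0, 0.1, 150)

def nse(sim):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Retain every sampled parameter set whose fit passes an (assumed)
# behavioural threshold, rather than keeping only the single best set.
ks = rng.uniform(0.05, 0.95, 1000)
behavioural = [k for k in ks if nse(simulate(cal_rain, k)) > 0.8]

# Each retained set gives its own prediction for a new rainfall series;
# together they span a prediction range conditioned on the calibration data.
new_rain = rng.exponential(2.0, 50)
preds = np.array([simulate(new_rain, k) for k in behavioural])
lo, hi = preds.min(axis=0), preds.max(axis=0)
print(f"{len(behavioural)} behavioural sets; "
      f"mean prediction spread {np.mean(hi - lo):.3f}")
```

The spread between the lower and upper bounds at each time step is what a single "optimum" parameter set would hide, and is the raw material for the uncertainty estimation discussed in Chapter 7.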
The rest of this text builds upon this general outline of the modelling process by considering specific
examples of conceptual models and their application within the context of the types of evaluation procedure outlined in this chapter.