Different performance measures will usually give different results in terms of both the “optimum”
values of parameters and the relative sensitivity of different parameters.
Sensitivity may depend on the period of data used and, especially, on whether a particular component
of the model is “exercised” in a particular period. If it is not, for example if an infiltration excess runoff
production component is only used under extreme rainfalls, then the parameters associated with that
component will appear insensitive.
Model calibration has many of the features of a simple regression analysis, in that an optimum parameter
set is one that, in some sense, minimises the overall error or residuals. There are still residuals, however,
and this implies uncertainty in the predictions of a calibrated model. As in a statistical regression, these
uncertainties will normally get larger as the model predicts the responses for more and more extreme
conditions relative to the data used in calibration.
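To make the regression analogy concrete, the short sketch below (with purely hypothetical discharge values) computes the residuals between an observed and a simulated hydrograph and sums their squares; a calibration algorithm would then seek the parameter set that minimises some such aggregate error measure.

```python
import numpy as np

# Hypothetical observed and simulated discharges (arbitrary units)
q_obs = np.array([1.2, 3.5, 7.8, 5.1, 2.9, 1.6])
q_sim = np.array([1.0, 3.9, 7.1, 5.6, 3.2, 1.4])

# Residuals play the same role as in a statistical regression:
# an "optimum" parameter set minimises some aggregate of them.
residuals = q_obs - q_sim
sum_squared_errors = float(np.sum(residuals ** 2))
print(f"sum of squared errors: {sum_squared_errors:.3f}")
```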
Both model calibration (in the sense of finding an optimum parameter set) and model conditioning
(in the sense of finding a joint posterior parameter distribution) will depend on the shape of a response surface in
the parameter space for the chosen performance or likelihood measure. The complexity of that surface
depends on the interaction of the model with errors in the input data and, in some cases, on the
implementation of the numerical solution to the model equations.
Because of epistemic uncertainties, there can be no completely objective analysis of uncertainty in
rainfall-runoff modelling. The analysis, therefore, depends on a set of assumptions that a modeller is
prepared to accept and justify with a particular purpose in mind. Thus, the results of such an analysis
should always be associated with a clear exposition of the assumptions on which it is based.
7.2 Parameter Response Surfaces and Sensitivity Analysis
Consider, for simplicity, a model with only two parameters. Some initial values are chosen for the
parameters and the model is run with a calibration data set. The resulting predictions are compared with
some observed variables and a measure of goodness of fit is calculated and scaled so that, if the model
was a perfect fit, the goodness of fit would have a value of 1.0 and if the fit was very poor, it would have
a value of zero (specific performance measures are discussed in Section 7.3). Assume that the first run
resulted in a goodness of fit of 0.72, i.e. we would hope that the model could do better (get closer to a
value of 1). It is a relatively simple matter to set up the model, change the values of the parameters, make
another run and recalculate the goodness of fit. This is one of the options provided in the TOPMODEL
software (see Appendix A). However, how do we decide which parameter values to change in order to
improve the fit?
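As a hedged illustration of a measure scaled in this way, the sketch below implements a Nash-Sutcliffe-type efficiency (one of the measures discussed in Section 7.3). The true efficiency can be negative for very poor fits, so it is clipped at zero here purely to match the 0 to 1 scaling described above; the flow series are invented for the example.

```python
import numpy as np

def goodness_of_fit(observed, simulated):
    """Nash-Sutcliffe-type efficiency, clipped to the range [0, 1].

    Returns 1.0 for a perfect fit; values near zero indicate the model
    does no better than predicting the mean of the observations.
    """
    residual_ss = np.sum((observed - simulated) ** 2)
    mean_ss = np.sum((observed - observed.mean()) ** 2)
    return max(0.0, 1.0 - residual_ss / mean_ss)

# Hypothetical calibration run giving a moderate fit
q_obs = np.array([1.2, 3.5, 7.8, 5.1, 2.9, 1.6])
q_sim = np.array([1.5, 2.8, 6.0, 5.9, 3.5, 1.1])
print(f"goodness of fit: {goodness_of_fit(q_obs, q_sim):.2f}")
```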
One way is by simple trial and error: plotting the results on screen, thinking about the role of each
parameter in the model and changing the values to make the hydrograph peaks higher, the recessions
longer or whatever is needed. This can be very instructive but, as the number of parameters gets larger,
it becomes more and more difficult to sort out all the different interactions of different parameters in the
model and decide what to change next (try it with the demonstration TOPMODEL software in which up
to five parameters may be changed). In the very early days of rainfall-runoff modelling, there was a story
that the only person who could really calibrate the Stanford Watershed Model, with all its parameters,
was Norman Crawford, who wrote the original version of the model as part of his PhD thesis.
7.2.1 Defining a Response Surface
Another way is to make enough model runs to evaluate the model performance in the whole of the
parameter space. In the simple two-parameter example, we could decide on a range of values for each
parameter, use 10 discrete increments on each parameter range and run the model for every combination
of the parameter values, i.e. 10 × 10 = 100 runs.
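The sketch below shows what such an exhaustive evaluation might look like, using an invented two-parameter linear store in place of a real rainfall-runoff model and the clipped efficiency sketched earlier; each parameter range is discretised into 10 values and the measure is evaluated for all 100 combinations.

```python
import numpy as np

rain = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 0.0, 1.0, 0.0])

def run_model(k, tau):
    """Toy two-parameter model: rainfall routed through a linear store.

    k scales the effective rainfall; tau controls how quickly the
    store drains. Purely illustrative, not a real runoff model.
    """
    q = np.zeros_like(rain)
    store = 0.0
    for t, r in enumerate(rain):
        store += k * r
        q[t] = store / tau
        store -= q[t]
    return q

def goodness_of_fit(observed, simulated):
    """Nash-Sutcliffe-type efficiency, clipped to [0, 1]."""
    residual_ss = np.sum((observed - simulated) ** 2)
    mean_ss = np.sum((observed - observed.mean()) ** 2)
    return max(0.0, 1.0 - residual_ss / mean_ss)

# "Observed" flows generated with known parameter values so that the
# response surface has a well-defined peak
q_obs = run_model(0.8, 3.0)

# 10 discrete increments on each parameter range: 10 x 10 = 100 runs
k_values = np.linspace(0.1, 1.5, 10)
tau_values = np.linspace(1.0, 6.0, 10)
surface = np.zeros((10, 10))
for i, k in enumerate(k_values):
    for j, tau in enumerate(tau_values):
        surface[i, j] = goodness_of_fit(q_obs, run_model(k, tau))

i_best, j_best = np.unravel_index(np.argmax(surface), surface.shape)
print(f"best fit {surface[i_best, j_best]:.3f} "
      f"at k={k_values[i_best]:.2f}, tau={tau_values[j_best]:.2f}")
```

Plotting the resulting array as a contour map over the two parameter axes gives the kind of response surface discussed in the remainder of this section.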