In evaluating the goodness of fit of models to observations we often find that the response surface in
the parameter space is very complex. There may be flat areas due to a lack of excitation of certain
parameters, long ridges due to interactions between parameters, and many local optima as well as
the point of global best fit. In general, carefully designed, simple models with a small number of
parameters and a good numerical implementation avoid the problem of overparameterisation and have
smoother parameter response surfaces, but this may be difficult to achieve in rainfall-runoff modelling.
A review of automatic optimisation techniques reveals that many have difficulty in finding the global
optimum on a complex response surface. Global search techniques, such as simulated annealing, genetic
algorithms and the Shuffled Complex Evolution (SCE) algorithm, have been designed to be more robust
in finding the global optimum.
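As an illustration of the global search idea, the sketch below applies simulated annealing to an invented two-parameter surface with many local optima; the objective function, cooling schedule and step size are illustrative assumptions, not a real model calibration.

```python
# A minimal sketch of simulated annealing on a synthetic two-parameter
# response surface with many local optima. The objective function and all
# settings are illustrative assumptions, not a hydrological model.
import math
import random

def objective(x, y):
    # Synthetic multimodal surface: a global basin plus oscillatory local optima.
    return (x - 2.0) ** 2 + (y + 1.0) ** 2 + 3.0 * math.sin(3.0 * x) * math.sin(3.0 * y)

def simulated_annealing(steps=20000, temp0=5.0, cooling=0.9995, step_size=0.5):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    current = objective(x, y)
    best = (current, x, y)
    temp = temp0
    for _ in range(steps):
        # Propose a random perturbation of the current parameter set.
        xn, yn = x + random.gauss(0, step_size), y + random.gauss(0, step_size)
        candidate = objective(xn, yn)
        # Always accept downhill moves; accept uphill moves with a
        # probability that shrinks as the temperature falls, which is what
        # allows escape from local optima early in the search.
        if candidate < current or random.random() < math.exp((current - candidate) / temp):
            x, y, current = xn, yn, candidate
            if current < best[0]:
                best = (current, x, y)
        temp *= cooling
    return best

print(simulated_annealing())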
Set-theoretic techniques based on Monte Carlo simulation suggest, however, that the idea of an optimum
parameter set might be illusory and would be better replaced by a concept of equifinality in simulating
a catchment: there may be many different model structures or parameter sets that could be considered
acceptable in simulation.
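A toy Monte Carlo experiment can make the equifinality point concrete. In the hypothetical two-parameter model below, the parameters interact through their product, so very many different parameter pairs fit the synthetic data equally well; the model, data and acceptance threshold are all invented for illustration.

```python
# A minimal Monte Carlo illustration of equifinality, using an invented toy
# model in which two parameters interact through their product, so many
# different (a, b) pairs fit the same data equally well.
import random

random.seed(1)
obs = [2.0 * t for t in range(10)]   # synthetic "observations": y = (a*b)*t with a*b = 2

def sse(a, b):
    # Sum of squared errors between the toy model a*b*t and the observations.
    return sum((a * b * t - o) ** 2 for t, o in zip(range(10), obs))

samples = [(random.uniform(0.1, 5), random.uniform(0.1, 5)) for _ in range(100000)]
acceptable = [(a, b) for a, b in samples if sse(a, b) < 0.01]
print(len(acceptable), "acceptable parameter sets, e.g.", acceptable[:3])
```

The acceptable sets fall along a ridge in the parameter space (here, the curve a*b = 2), which is exactly the kind of parameter interaction described above.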
The concept of the Pareto optimal set allows for the possibility that multi-criteria parameter optimisation might result
in a set of models, each of which achieves a different balance between the different performance
measures, but all of which are better than models outside the optimal set. This results in a range of
different predictions from the different models in the set, but the range of predictions may not bracket
the observations.
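A minimal sketch of extracting the Pareto optimal set from models scored on two error criteria (both to be minimised) might look as follows; the scores are randomly invented stand-ins for, say, a peak-flow error and a low-flow error for each candidate parameter set.

```python
# A minimal sketch of identifying the Pareto optimal (non-dominated) set
# among models scored on two error criteria, both to be minimised.
# The scores are invented for illustration.
import random

random.seed(2)
# (criterion_1, criterion_2) error pairs for 50 hypothetical models.
scores = [(random.random(), random.random()) for _ in range(50)]

def dominates(p, q):
    # p dominates q if it is no worse on both criteria and strictly better
    # on at least one (guaranteed here by p != q with both <= holding).
    return p[0] <= q[0] and p[1] <= q[1] and p != q

pareto = [p for p in scores if not any(dominates(q, p) for q in scores)]
print(sorted(pareto))
```

Each member of the resulting set achieves a different balance between the two criteria, and no member can be improved on one criterion without becoming worse on the other.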
Bayesian statistical methods make specific assumptions about the structure of model residuals, from
which the definition of a likelihood function follows. This generally leads to over-conditioning of
posterior parameter distributions when the errors are epistemic rather than only aleatory.
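For instance, under the common assumption of independent Gaussian residuals with constant variance, the log-likelihood takes the form sketched below; the numbers are illustrative only.

```python
# A minimal sketch of the likelihood that follows from assuming independent,
# Gaussian model residuals with constant variance sigma^2, the kind of formal
# assumption referred to above. Values are illustrative only.
import math

def gaussian_log_likelihood(observed, simulated, sigma):
    # log L = -N/2 * log(2*pi*sigma^2) - sum(residual^2) / (2*sigma^2)
    residuals = [o - s for o, s in zip(observed, simulated)]
    n = len(residuals)
    return (-0.5 * n * math.log(2.0 * math.pi * sigma ** 2)
            - sum(r * r for r in residuals) / (2.0 * sigma ** 2))

print(gaussian_log_likelihood([1.0, 2.0, 3.0], [1.1, 1.9, 3.2], sigma=0.2))
```

Because this likelihood becomes increasingly peaked as the number of observations grows, the posterior concentrates on a very narrow region of the parameter space; when the residuals are not actually independent Gaussian noise, that concentration is the over-conditioning referred to above.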
Generalised likelihood uncertainty estimation (GLUE) is one technique for conditioning model
parameter sets based on the equifinality concept. In the GLUE methodology, many different model
runs are made using randomly chosen parameter sets. Each run is evaluated against observed data
by means of a likelihood measure; if a model is rejected as non-behavioural, it is given a likelihood value of zero.
The likelihood measures are then used to weight the predictions of the retained models to calculate
uncertainty estimates or prediction limits for the simulation. Likelihood values from different types of
data may be combined in different ways or updated as more data are collected.
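A minimal sketch of the GLUE procedure, under simple assumptions (uniform random sampling, a Nash-Sutcliffe-type efficiency as the informal likelihood measure, a behavioural threshold, and likelihood-weighted quantiles as prediction limits), might look like this; the linear-store toy model and all settings are invented for illustration.

```python
# A minimal GLUE sketch: sample parameter sets at random, score each run with
# a Nash-Sutcliffe-style efficiency, reject non-behavioural runs (likelihood
# zero), and form likelihood-weighted prediction limits from the rest.
import random

random.seed(3)
T = 30
rain = [random.uniform(0, 5) for _ in range(T)]

def model(k):
    # Toy linear-store runoff model: storage drains at rate k per step.
    s, q = 0.0, []
    for r in rain:
        s += r
        out = k * s
        s -= out
        q.append(out)
    return q

obs = [v + random.gauss(0, 0.2) for v in model(0.3)]   # synthetic observations
mean_obs = sum(obs) / T
var_obs = sum((o - mean_obs) ** 2 for o in obs)

behavioural = []
for _ in range(5000):
    k = random.uniform(0.05, 0.95)
    sim = model(k)
    nse = 1.0 - sum((s - o) ** 2 for s, o in zip(sim, obs)) / var_obs
    if nse > 0.7:                      # reject non-behavioural runs: likelihood = 0
        behavioural.append((nse, sim))

# Likelihood-weighted 5% and 95% prediction limits at each time step.
total = sum(w for w, _ in behavioural)
limits = []
for t in range(T):
    ranked = sorted((sim[t], w / total) for w, sim in behavioural)
    cum, lo, hi = 0.0, ranked[0][0], ranked[-1][0]
    for value, weight in ranked:
        cum += weight
        if cum <= 0.05:
            lo = value
        if cum <= 0.95:
            hi = value
    limits.append((lo, hi))
print(len(behavioural), "behavioural runs; limits at t=10:", limits[10])
```

The choice of likelihood measure, behavioural threshold and quantile levels is subjective, which is why the approach places so much emphasis on making those choices explicit.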
This approach to modelling focuses attention on the value of different types of data in rejecting or
falsifying models. Hypothesis tests may be formulated to refine the set of acceptable models in a truly
scientific way. Some compromise in such tests is generally necessary, however, since if the criteria for
acceptability are made too strict, all models are rejected.
The assessment of the uncertainty associated with a set of model predictions is also an assessment
of the risk of a certain outcome that can be used in a risk-based decision analysis for the problem
under study. Taking account of uncertainty might make a difference to the decision that is made
and it might therefore not be good practice to rely on deterministic simulations in informing the
decision process.
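A minimal sketch of how likelihood-weighted predictions can feed a risk-based decision follows; the flood levels, weights, defence heights and costs are all invented for illustration.

```python
# A minimal sketch of a risk-based decision using a likelihood-weighted
# ensemble of predictions rather than a single deterministic value.
# All levels, weights and costs are invented for illustration.

# Weighted ensemble of predicted peak flood levels (m); weights sum to 1.
predictions = [(1.6, 0.5), (1.9, 0.3), (2.8, 0.2)]
current_defence = 2.0     # existing defence crest level (m)
raised_defence = 3.0      # proposed raised crest level (m)
raise_cost = 150_000.0    # cost of raising the defence
damage = 1_000_000.0      # loss if the defence is overtopped

def p_overtop(height):
    # Probability of overtopping a given crest level, from the ensemble.
    return sum(w for level, w in predictions if level > height)

expected_do_nothing = p_overtop(current_defence) * damage
expected_raise = raise_cost + p_overtop(raised_defence) * damage
print(f"do nothing: {expected_do_nothing:,.0f}, raise: {expected_raise:,.0f}")
```

In this invented example the weighted-mean prediction (about 1.9 m) lies below the existing crest, so a deterministic analysis would suggest doing nothing, while the expected cost of the low-probability tail outcome favours raising the defence: an instance of uncertainty changing the decision.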
There is still much to be learned about the information content in periods of calibration data of
different types for model evaluation and the constraint of prediction uncertainty. Because of the
limitations of hydrological data, some periods might actually be disinformative if used in this
way. There will also be more information in periods of hydrologically consistent data that show
quite different characteristics than in additional data with characteristics similar to those
already available.
There is little guidance available on the value of different types of data in constraining the uncertainty
associated with model predictions. This is best posed as a problem of how to spend a budget on
observational data.