of “individuals” (different parameter sets) is chosen as a starting point and then allowed to “evolve”
over successive generations or iterations in a way that improves the “fitness” (performance measure)
at each iteration until a global optimum fitness is reached. The algorithms differ in the operations used
to evolve the population at each iteration, which include selection, cross-over and mutation. A popular
description has been given by Forrest (1993); more detailed descriptions are given by Davis (1991). Sen
and Stoffa (1995) show how some elements of simulated annealing can be included in a genetic algorithm
approach. GA optimisation was used by Wang (1991) in calibrating the Xinanjiang model, by Kuczera
(1997) with a five-parameter conceptual rainfall-runoff model and by Franchini and Galeati (1997) with
an 11-parameter rainfall-runoff model.
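The selection, crossover and mutation operations described above can be sketched in a few lines. This is a minimal illustration on a hypothetical two-parameter toy objective, not any of the rainfall-runoff models cited; the population size, mutation scale and fitness function are all assumptions chosen for clarity.

```python
import random

def fitness(params):
    # Toy "performance measure": negative squared distance from (2.0, -1.0),
    # so the global optimum fitness is 0 at that point (hypothetical example).
    x, y = params
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

def evolve(pop_size=40, generations=200, seed=42):
    rng = random.Random(seed)
    # Initial population of "individuals" (random parameter sets).
    pop = [(rng.uniform(-10.0, 10.0), rng.uniform(-10.0, 10.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Crossover: blend the two parents' parameter values.
            w = rng.random()
            child = [w * pa + (1.0 - w) * pb for pa, pb in zip(a, b)]
            # Mutation: small random perturbation of each parameter.
            children.append(tuple(c + rng.gauss(0.0, 0.1) for c in child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the fitter half is carried forward unchanged, the best fitness found never decreases between generations, which is the elitist variant of selection.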
One form of algorithm that has been developed for use in rainfall-runoff modelling, which combines
hill-climbing techniques with GA ideas, is the shuffled complex evolution (SCE) algorithm developed
at the University of Arizona (UA) by Duan et al. (1992, 1993). In this algorithm, different Simplex
searches are carried out in parallel from each random starting point. After each iteration of the multiple
searches, the current parameter values are shuffled to form new Simplexes which then form new starting
points for a further search iteration. This shuffling allows global information about the response surface
to be shared and means that the algorithm is generally robust to the presence of multiple local optima.
Kuczera (1997) concluded that the SCE algorithm was more successful in finding the global optimum in
a five-parameter space than a classical crossover GA. The SCE-UA algorithm has become one
of the most widely used optimisation algorithms in rainfall-runoff modelling because of its robustness
in finding the global optimum on complex surfaces. The methodology has also been incorporated into
more general response surface search algorithms (see Box 7.3).
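The shuffling idea can be illustrated with a heavily simplified sketch: points are ranked, dealt out into complexes, each complex is improved with a simplex-style reflection (with a contraction fallback), and the points are then pooled and re-dealt so information about the response surface is shared. This is an illustration of the shuffling principle only, on an assumed toy objective; it omits much of the full Duan et al. (1992) SCE-UA algorithm (sub-complex selection, random replacement, convergence tests).

```python
import random

def objective(p):
    # Toy response surface with a single global minimum at (1.0, 3.0).
    x, y = p
    return (x - 1.0) ** 2 + (y - 3.0) ** 2

def evolve_complex(complex_pts):
    # Simplex-style step: reflect the worst point of the complex through
    # the centroid of the remaining points; if that fails to improve,
    # contract the worst point halfway towards the centroid instead.
    pts = sorted(complex_pts, key=objective)
    worst, others = pts[-1], pts[:-1]
    centroid = tuple(sum(c) / len(others) for c in zip(*others))
    reflected = tuple(2.0 * g - w for g, w in zip(centroid, worst))
    if objective(reflected) < objective(worst):
        pts[-1] = reflected
    else:
        contracted = tuple((g + w) / 2.0 for g, w in zip(centroid, worst))
        if objective(contracted) < objective(worst):
            pts[-1] = contracted
    return pts

def sce_sketch(n_complexes=4, complex_size=6, iterations=200, seed=1):
    rng = random.Random(seed)
    n = n_complexes * complex_size
    pop = [(rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)) for _ in range(n)]
    history = []
    for _ in range(iterations):
        # Shuffle: rank the whole population, then deal the points out so
        # every complex contains a spread of good and poor points.
        pop.sort(key=objective)
        history.append(objective(pop[0]))
        complexes = [pop[i::n_complexes] for i in range(n_complexes)]
        # Evolve each complex independently, then pool for the next shuffle.
        complexes = [evolve_complex(c) for c in complexes]
        pop = [p for c in complexes for p in c]
    return min(pop, key=objective), history

best, history = sce_sketch()
print(best, objective(best))
```

Since points are only ever replaced by strictly better ones, the best objective value in the population is non-increasing across shuffles, and the periodic re-dealing is what lets progress in one complex redirect the searches in the others.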
7.5 Recognising Uncertainty in Models and Data: Forward Uncertainty Estimation
The techniques of Section 7.4 are designed to find an optimum parameter set as efficiently as possible. A
run of the model using that optimum parameter set will give the best fit to the observations used for the
calibration, as defined by the performance measure used. It has long been recognised that different
performance measures, and different calibration datasets, generally result in different optimum parameter
sets. Thus, as far as is possible, the performance measure should reflect the purpose of the modelling.
The optimum parameter set alone, however, will reveal little about the possible uncertainty associated
with the model predictions.
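The dependence of the "optimum" on the performance measure can be shown with a deliberately simple, hypothetical example (not data from the text): calibrating a single constant against observations containing one outlier, a squared-error measure and an absolute-error measure select noticeably different parameter values.

```python
# Hypothetical calibration data: three typical values and one outlier.
obs = [1.0, 1.0, 1.0, 10.0]

def sse(c):
    # Sum of squared errors: heavily penalises the outlier.
    return sum((o - c) ** 2 for o in obs)

def sae(c):
    # Sum of absolute errors: more robust to the outlier.
    return sum(abs(o - c) for o in obs)

# Simple grid search over candidate parameter values 0.00 .. 10.00.
grid = [i / 100 for i in range(0, 1001)]
best_sse = min(grid, key=sse)  # the mean of obs
best_sae = min(grid, key=sae)  # the median of obs
print(best_sse, best_sae)
```

The squared-error optimum is pulled towards the outlier (the mean, 3.25) while the absolute-error optimum stays with the bulk of the data (1.0), which is exactly why the measure should be matched to the purpose of the modelling.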
There are many causes of uncertainty in a modelling study of which the most important are as follows
(we consider the additional uncertainties associated with calibration data later):
• uncertainties in initial and boundary conditions, including model inputs;
• uncertainties in the model structure;
• uncertainties in the model parameter estimates;
• uncertainties that have been overlooked (including known omissions and unknown unknowns).
All tend to induce uncertainty in the model predictions that should, as far as possible, be assessed.
As noted earlier, not all of the uncertainties will be statistical or aleatory in nature. Very often, for
example, we suspect that patterns of rainfall have been such that an event has been under-recorded
or over-recorded by the available raingauges. Such uncertainties are neither purely random nor systematic, but rather change over time, as is typical of uncertainties resulting from lack of knowledge. In extreme cases,
data subject to such errors might not be informative in model and parameter identification (Beven and
Westerberg, 2011).
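Forward uncertainty estimation of this kind is often carried out by Monte Carlo sampling: draw many samples from assumed distributions for the uncertain parameters and inputs, run the model for each sample, and summarise the spread of the predictions rather than reporting a single "optimal" run. The sketch below uses a hypothetical one-parameter linear model and assumed uncertainty ranges; both the model and the ranges are illustrative only.

```python
import random

def model(rain, k):
    # Toy model: predicted discharge is a fraction k of the rainfall input
    # (hypothetical; stands in for a full rainfall-runoff model run).
    return k * rain

def forward_uncertainty(n=10000, seed=7):
    rng = random.Random(seed)
    preds = []
    for _ in range(n):
        k = rng.uniform(0.2, 0.6)              # assumed parameter uncertainty
        rain = 20.0 * rng.uniform(0.8, 1.2)    # assumed +/-20% input error
        preds.append(model(rain, k))
    preds.sort()
    # Report the 5th and 95th percentiles as a simple prediction interval.
    return preds[int(0.05 * n)], preds[int(0.95 * n)]

lo, hi = forward_uncertainty()
print(lo, hi)
```

The resulting interval reflects only the uncertainty sources that were sampled; structural errors and overlooked "unknown unknowns" in the list above are, by construction, not represented in the spread.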