the known behaviour of the system. There are usually good reasons for this outcome, so do not give up! Given that the parameter base for distributed models is generally small relative to the detail simulated, this outcome is perhaps not surprising. Similarly, lumped models have a sparse parameter representation relative to the natural variability in the system, and point measurements or spatial averages are often poor representations of the parameter interactions in this case. The evaluation of errors from these sources is dealt with later. However, in terms of model parameterization, it may be impossible to return to the field to carry out more measurements, yet we still need to obtain results for our model application. Thus, we need to adopt an iterative approach to the evaluation of the correct model parameters. This procedure is generally known as model calibration.

2.2.3 Calibration and its limitations

Kirkby et al. (1992) distinguish between physical parameters, which define the physical structure of the system under study, and process parameters, which define the order of magnitude of processes. Most models will contain both types of parameter. Definition of these process parameters is known as calibration or model tuning. Where they are physically based, this definition can be achieved by their measurement; otherwise, they are calibrated by optimization against a measure of the agreement between model results and a set of observations used for calibration. The calibration dataset must be independent of any dataset used later to validate the model; if the same dataset is used for both, it should be no surprise that the model is a perfect predictor! Split-sample approaches, in which the available data are separated into a calibration set and a separate validation set, are the usual solution to this problem. Calibration should pay particular attention to the sensitivity of parameters, with sensitive parameters being calibrated carefully against high-quality datasets to ensure that the resulting model will produce reliable outcomes.
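As a minimal sketch of the split-sample idea (the one-parameter runoff 'model', the data and all names here are invented for illustration), the observations are divided into a calibration period, used to choose the parameter value, and an independent validation period, used only to report the fit:

import numpy as np

# Hypothetical observed discharge series and model input (rainfall).
rng = np.random.default_rng(42)
rainfall = rng.gamma(2.0, 3.0, size=200)
observed = 0.6 * rainfall + rng.normal(0.0, 1.0, size=200)

def model(k, rainfall):
    """Toy one-parameter runoff model: runoff = k * rainfall."""
    return k * rainfall

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# Split-sample: first half for calibration, second half for validation.
half = len(observed) // 2
cal_rain, val_rain = rainfall[:half], rainfall[half:]
cal_obs, val_obs = observed[:half], observed[half:]

# Calibrate k on the calibration set only (simple grid search).
k_values = np.linspace(0.1, 1.5, 141)
errors = [rmse(model(k, cal_rain), cal_obs) for k in k_values]
k_best = k_values[int(np.argmin(errors))]

# Report fit on both sets; validation uses data unseen in calibration.
print(f"k = {k_best:.2f}")
print(f"calibration RMSE: {rmse(model(k_best, cal_rain), cal_obs):.2f}")
print(f"validation  RMSE: {rmse(model(k_best, val_rain), val_obs):.2f}")

A validation error much larger than the calibration error would suggest that the fitted parameter does not transfer beyond the period it was tuned on.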
The simplest form of optimization is trial and error, whereby model parameters are altered and a measure of goodness of fit between the model results and the calibration dataset is noted. This process is repeated iteratively to obtain the best possible fit of observed against predicted. Of course, the calibration will be specific to the model results calibrated against, and will produce a model which should forecast this result well at the expense of other model outputs not involved in the calibration procedure. The choice of calibration parameters, measures and techniques will thus depend upon the purpose to which the model will be put. Moreover, a model calibration by one user with a particular understanding of its function may be quite different from that of another (Botterweg, 1995), and a model calibrated to a particular objective, such as the prediction of peak runoff, may be useless in the prediction of total annual runoff.

Some prior knowledge of, for example, the reasonable ranges of parameter values will also be necessary, and calibration will usually follow a preliminary sensitivity or uncertainty analysis, which is performed to test the validity of the model. The relationship between the range of values for a parameter and the model agreement is known as the calibration curve for that parameter. A parameter that shows a significant change in error with a change in its value (with all other parameters held constant) is known as a sensitive parameter.
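A calibration curve of this kind can be traced by a one-at-a-time sweep: vary the parameter of interest across its plausible range, hold the others constant, and record the error at each step. The sketch below assumes a toy two-parameter model; all names and values are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
rainfall = rng.gamma(2.0, 3.0, size=100)
observed = 0.5 * rainfall + 2.0 + rng.normal(0.0, 0.5, size=100)

def model(k, c, rainfall):
    """Toy model with a slope parameter k and an offset c."""
    return k * rainfall + c

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# One-at-a-time sweep: vary k, hold c constant, record the error.
# The resulting (k, error) pairs trace the calibration curve for k.
c_fixed = 2.0
curve = [(k, rmse(model(k, c_fixed, rainfall), observed))
         for k in np.linspace(0.1, 1.0, 19)]
for k, err in curve:
    print(f"k = {k:.2f}  RMSE = {err:.2f}")
# A steep change of error with k marks k as a sensitive parameter;
# a flat curve would mark it as insensitive.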
If a model has only one parameter, it is usually fairly straightforward to find the optimal value for that parameter. This procedure becomes only marginally more difficult for models with more than one parameter where the parameters are independent. In most models, however, parameters are highly interdependent, which confounds the definition of an optimum parameterization. In these cases, other, automated, techniques are used to define the optimum parameter set. These techniques include genetic algorithms and fuzzy-logic approaches, as used by Cheng et al. (2002) to calibrate a rainfall-runoff model to multiple objectives (peak discharge, peak time and total runoff volume).
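The following sketch shows the bare bones of a genetic-algorithm calibration. It collapses the three objectives into a single weighted fitness, which is a simplification of the true multi-objective schemes used in studies such as Cheng et al. (2002), and the toy hydrograph model, weights and parameter bounds are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed hydrograph to calibrate against.
t = np.arange(50)
observed = 10.0 * np.exp(-0.5 * (t - 20.0) ** 2 / 16.0)

def model(params, t):
    """Toy hydrograph model: peak height, peak time, spread."""
    peak, t_peak, spread = params
    return peak * np.exp(-0.5 * (t - t_peak) ** 2 / spread ** 2)

def fitness(params):
    """Weighted sum of errors in peak discharge, peak time and
    total volume; higher is better."""
    sim = model(params, t)
    err_peak = abs(sim.max() - observed.max())
    err_time = abs(t[sim.argmax()] - t[observed.argmax()])
    err_vol = abs(sim.sum() - observed.sum())
    return -(err_peak + err_time + 0.1 * err_vol)

# Parameter bounds: (peak, t_peak, spread).
lo = np.array([1.0, 0.0, 1.0])
hi = np.array([20.0, 49.0, 10.0])

pop = rng.uniform(lo, hi, size=(40, 3))        # initial population
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]  # truncation selection
    # Uniform crossover between randomly paired parents.
    i = rng.integers(0, 20, size=(40, 2))
    mask = rng.random((40, 3)) < 0.5
    children = np.where(mask, parents[i[:, 0]], parents[i[:, 1]])
    # Gaussian mutation, clipped back to the parameter bounds.
    children += rng.normal(0.0, 0.05, size=children.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)

best = max(pop, key=fitness)
print("calibrated (peak, t_peak, spread):", np.round(best, 2))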
Calibration is particularly challenging in distributed models, which tend to have a large number of parameters; stochastic or evolutionary genetic algorithms seem to be the most successful approaches under these circumstances (Eckhardt and Arnold, 2001) and have been applied widely where the calibration has multiple objectives (Madsen, 2000; 2003). In distributed models, there may also be advantages in calibrating different areas, such as subcatchments, separately and independently rather than as an integrated whole (e.g. Seibert et al., 2000).
Ratto et al. (2001) highlight the utility of the global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) approaches (see below) in the calibration of over-parameterized models with strong parameter interaction. Global sensitivity analysis is a model-independent approach based on estimating the fractional contribution of each input factor to the variance in the model output, accounting also for interaction terms.
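A crude illustration of this variance decomposition (not the Sobol' or FAST estimators used in practice) can be obtained by binning a Monte Carlo sample on each factor and comparing the variance of the conditional means with the total output variance. The toy model below, with an interaction term included deliberately, is purely illustrative.

import numpy as np

rng = np.random.default_rng(2)

def model(x1, x2, x3):
    """Toy model with an interaction term between x1 and x2."""
    return x1 + 0.5 * x2 + 2.0 * x1 * x2 + 0.1 * x3

# Monte Carlo sample of the input factors.
n = 100_000
X = rng.uniform(0.0, 1.0, size=(n, 3))
y = model(X[:, 0], X[:, 1], X[:, 2])

def first_order_index(xi, y, bins=50):
    """Crude binning estimator of S_i = Var(E[y | x_i]) / Var(y)."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1,
                  0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i in range(3):
    print(f"S_{i + 1} = {first_order_index(X[:, i], y):.2f}")
# The first-order indices do not sum to 1 here: the remainder is
# the fraction of output variance due to the x1-x2 interaction.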
GLUE allows model runs to be classified according to the likelihood of their being a good simulator of the system, recognizing that many different parameter sets may produce equally good simulations.
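A minimal GLUE-style sketch, assuming a toy two-parameter model, uniform priors and the Nash-Sutcliffe efficiency as the likelihood measure (the 0.7 threshold is an arbitrary illustrative choice): parameter sets are sampled at random, runs above the threshold are retained as 'behavioural', and their rescaled likelihoods weight the predictions.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observations and a toy two-parameter model.
rainfall = rng.gamma(2.0, 3.0, size=100)
observed = 0.5 * rainfall + 2.0 + rng.normal(0.0, 0.8, size=100)

def model(k, c):
    return k * rainfall + c

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Monte Carlo sample of parameter sets from uniform priors.
n = 10_000
k = rng.uniform(0.0, 1.0, n)
c = rng.uniform(0.0, 5.0, n)
eff = np.array([nash_sutcliffe(model(k[i], c[i]), observed)
                for i in range(n)])

# Classify runs: those above the threshold are 'behavioural'.
behavioural = eff > 0.7
weights = eff[behavioural] / eff[behavioural].sum()  # rescaled likelihoods

print(f"{behavioural.sum()} of {n} runs accepted as behavioural")
print("k range:", k[behavioural].min().round(2), "-", k[behavioural].max().round(2))
print("c range:", c[behavioural].min().round(2), "-", c[behavioural].max().round(2))

# Likelihood-weighted prediction from the behavioural ensemble.
sims = np.array([model(ki, ci)
                 for ki, ci in zip(k[behavioural], c[behavioural])])
pred = weights @ sims
print("weighted-mean prediction, first 5 steps:", pred[:5].round(2))

The wide ranges of accepted k and c values illustrate the point above: many distinct parameter sets simulate the observations almost equally well.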