for an event-based erosion model the uncertainty
bands (in this case 95% confidence intervals)
around the predictions could be expected to be
large, although the model in most cases was able
to encompass the observed data. To try to reduce
the width of the 95% confidence intervals, Quinton
went on to select only those parameter sets which
passed a performance criterion for a series of
training storms, before applying the better-performing
parameter sets to the new observations. This had
the desired effect of reducing the 95% confidence
interval width, but resulted in the model failing to
encompass much of the observed data. Other pub-
lished studies in which EUROSEM has been evalu-
ated are summarized in Table 5.1. These have
compared the model output with data collected
under natural rainfall (Quinton, 1997; Quinton &
Morgan, 1998; Folly et al., 1999; Cai et al., 2005;
Mati et al., 2006) or against data collected with
rainfall simulators (Veihe et al., 2001; Cai et al., 2005).
The majority of these evaluations have adopted the
common practice of splitting the dataset, using
one half to train or calibrate the model and the
other half for evaluation. In this procedure
the model is parameterized on the basis of field
measurements, estimates and look-up tables in the
model's user guide (Morgan et al., 1998b). Parameters
to which the model is sensitive are then changed so
that the model output matches the observed data
for the training datasets. Once an acceptable fit has
been achieved (or the best fit possible), these param-
eter values are then used as the basis for applying
the model to a second series of observed data either
from the same site or from an entirely different
location, to test the transferability of the model.
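The split-sample procedure described above can be sketched in code. The following is a minimal illustration only: the erosion model, the candidate parameter sets, and the choice of Nash-Sutcliffe efficiency as the performance measure are all placeholder assumptions, not EUROSEM or the procedure of any particular study cited here.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, below 0 is worse than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def split_sample_calibration(events, model, param_sets, train_fraction=0.5):
    """Calibrate on the first portion of the events, evaluate on the remainder.

    events     : list of (inputs, observed) pairs, one per storm event
    model      : callable model(inputs, params) -> simulated output array
    param_sets : candidate parameter dictionaries to try
    """
    n_train = int(len(events) * train_fraction)
    train, test = events[:n_train], events[n_train:]

    def mean_score(split, params):
        return np.mean([nash_sutcliffe(obs, model(inp, params))
                        for inp, obs in split])

    # Calibration: keep the parameter set with the best mean score on training events
    best = max(param_sets, key=lambda p: mean_score(train, p))

    # Evaluation: apply the calibrated parameters to the held-out events
    return best, mean_score(train, best), mean_score(test, best)
```

Transferability testing is the same call with `test` drawn from a different site rather than the second half of the same record.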
So why re-examine the model's performance at
this relatively mature stage of development? A
number of reasons for revisiting model evaluation
are discussed in Chapter 4, not least that iterative
improvements of both models and techniques to
evaluate models are vital if we are to develop bet-
ter predictions of soil erosion. There is also a grow-
ing understanding amongst many environmental
scientists that developing more robust analyses of
model predictions requires an evaluation of the
uncertainties involved in the modelling process
(Beven & Freer, 2001; Krueger et al., 2007).
Approaches exist that address the difficulty of
both defining physically-based models of erosion
dynamics and evaluating their uncertainties. For
example, Tayfur et al. (2003) have explored the
simulation of erosion dynamics using fuzzy mod-
elling approaches. Others have chosen to concen-
trate on the uncertainties in the data directly, for
example using bootstrapping methods to assess
rating curve and sediment load dynamics, show-
ing considerable uncertainties in this information
(Rustomji & Prosser, 2001). In a series of papers,
Krueger et al. (2007, 2009, 2010) demonstrate the
importance of taking into account uncertainties in
hydrological measurements as well as those asso-
ciated with models. Hydrographs are not error-free,
and Krueger et al. (2009) demonstrate that
errors may be significant (see Section 5.4). In such
cases, quantifying uncertainties in the data may be
important for correctly defining the appropriate
metric (objective function or performance measure) to
assess the quality of model simulations. However,
characterizing measurement uncertainty in the
form of probability distributions, fuzzy numbers,
error intervals, or the like, requires extended meas-
urement efforts if the uncertainty estimates are to
bear any relation to actual properties in the field.
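The bootstrap approach to data uncertainty mentioned above can be sketched for a sediment rating curve. This is an illustrative outline only: the power-law form c = aQ^b is a standard rating-curve assumption, and the function names and resampling details here are ours, not the exact procedure of Rustomji and Prosser (2001).

```python
import numpy as np

def fit_rating_curve(q, c):
    """Fit the power-law rating curve c = a * q**b by linear regression in log space."""
    b, log_a = np.polyfit(np.log(q), np.log(c), 1)
    return np.exp(log_a), b

def bootstrap_rating_curve(q, c, q_pred, n_boot=2000, seed=0):
    """Bootstrap the (q, c) pairs to get a 95% interval on concentration at q_pred."""
    rng = np.random.default_rng(seed)
    preds = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(q), len(q))   # resample pairs with replacement
        a, b = fit_rating_curve(q[idx], c[idx])
        preds[i] = a * q_pred ** b
    return np.percentile(preds, [2.5, 97.5])
```

The spread of the resulting interval gives an empirical, data-driven estimate of rating-curve uncertainty without assuming a parametric error model.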
In this chapter we adopt the extended
Generalised Likelihood Uncertainty Estimation
(GLUE) methodology (Beven, 2006; see also
Sections 4.5 and 4.6), where the evaluation of the
model predictions takes into account both the
uncertainties in the model and the observed data.
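In outline, GLUE samples many parameter sets from a prior, retains those whose likelihood measure passes a behavioural threshold, and forms likelihood-weighted prediction bounds from the retained simulations. The sketch below illustrates that rejection-and-weighting step only; the likelihood measure (Nash-Sutcliffe efficiency), the threshold, and the toy model in the usage example are our illustrative assumptions, not the choices made in this chapter.

```python
import numpy as np

def glue(model, obs, prior_samples, threshold=0.5):
    """Skeleton of Generalised Likelihood Uncertainty Estimation (GLUE).

    model         : callable model(params) -> simulated series, same shape as obs
    obs           : observed series
    prior_samples : iterable of candidate parameter sets drawn from the prior
    threshold     : behavioural cut-off on the likelihood measure (here NSE)
    """
    sims, weights = [], []
    for params in prior_samples:
        sim = model(params)
        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
        if nse > threshold:          # keep only behavioural parameter sets
            sims.append(sim)
            weights.append(nse)
    sims = np.asarray(sims)
    w = np.asarray(weights) / np.sum(weights)

    # Likelihood-weighted 5% and 95% prediction quantiles at each time step
    bounds = np.empty((2, obs.size))
    for t in range(obs.size):
        order = np.argsort(sims[:, t])
        cdf = np.cumsum(w[order])
        bounds[0, t] = sims[order, t][np.searchsorted(cdf, 0.05)]
        bounds[1, t] = sims[order, t][np.searchsorted(cdf, 0.95)]
    return bounds
```

Widening the behavioural threshold admits more parameter sets and widens the bounds; tightening it narrows the bounds but risks excluding observations, which is exactly the trade-off seen in the Quinton study discussed above.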
Although the examination of model uncertainty
for highly parameterized hydrological models is
now becoming common in the hydrological lit-
erature, to our knowledge this marks one of only
a few attempts to vary a large number of param-
eters in a highly parameterized erosion model
(see Brazier et al. (2000), Hantush & Kalin (2005)
and Wei et al. (2008) for other examples). In this
study, varying all of EUROSEM's parameters was
still beyond our computational power, but such a
comprehensive treatment of parameter uncer-
tainties is not necessary given our understanding
of previously published parameter sensitivities.
Here, we take the next step towards our longer-
term objective of evaluating erosion models with