that the likelihood of the model describing the system behavior is practically zero. In this case, a preferred likelihood measure is (Arabi et al., 2007)
$$L(q \mid x, y) = \max\left(1 - \frac{\sigma_\varepsilon^2}{\sigma_0^2},\; 0\right) \qquad (11.48)$$

where $\sigma_\varepsilon^2$ is the variance of the model errors and $\sigma_0^2$ is the variance of the observations.
For each simulation of a randomly selected parameter set, a likelihood weight can be obtained from Equation (11.48); these weights are then rescaled by dividing each of them by their total sum. This yields the result
$$\sum_{m=1}^{M} L_m(q \mid x, y) = 1 \qquad (11.49)$$
where M is the number of parameter sets used in determining the likelihood function. Using the likelihood function $L_m(q \mid x, y)$, the model prediction quantiles at each time step t can be empirically calculated using the relation
$$P[\hat{y}_t < y] = \sum_{\hat{y}_t < y} L_m(q \mid x, y) \qquad (11.50)$$
where y represents a possible value of the estimated output variable $\hat{y}_t$ at time t, and the sum is taken over the parameter sets whose simulated value $\hat{y}_t$ is less than y. The prediction limits defined by the quantiles p/2 and 1 − p/2, where p is a number between 0 and 1, are often called the (1 − p) GLUE uncertainty bounds.
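As an illustration, the computation in Equations (11.48)–(11.50) can be sketched in Python as follows; the function names (nse_likelihood, glue_bounds), the ensemble array y_sims, and the zero-weight guard are illustrative assumptions, not part of the original formulation.

import numpy as np

def nse_likelihood(y_obs, y_sim):
    # Eq. (11.48): L = max(1 - sigma_eps^2 / sigma_0^2, 0)
    sigma_eps2 = np.mean((y_obs - y_sim) ** 2)   # error variance
    sigma_02 = np.var(y_obs)                     # variance of observations
    return max(1.0 - sigma_eps2 / sigma_02, 0.0)

def glue_bounds(y_obs, y_sims, p=0.1):
    # y_sims has shape (M, T): M sampled parameter sets, T time steps
    L = np.array([nse_likelihood(y_obs, y) for y in y_sims])
    if L.sum() == 0.0:                           # guard: no behavioural runs
        raise ValueError("all likelihood weights are zero")
    L /= L.sum()                                 # rescaled weights, Eq. (11.49)
    lower, upper = np.empty(y_obs.size), np.empty(y_obs.size)
    for t in range(y_obs.size):
        order = np.argsort(y_sims[:, t])         # sort ensemble values at time t
        cdf = np.cumsum(L[order])                # empirical CDF of y_hat_t, Eq. (11.50)
        vals = y_sims[order, t]
        lower[t] = vals[np.searchsorted(cdf, p / 2.0)]
        upper[t] = vals[min(np.searchsorted(cdf, 1.0 - p / 2.0), vals.size - 1)]
    return lower, upper                          # the (1 - p) GLUE uncertainty bounds

With p = 0.1, lower and upper trace out the 90% GLUE uncertainty bounds at each time step.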
The major criticisms of the GLUE method are related to its subjectivity and inconsistency with fundamental probability theory (Mantovan and Todini, 2006; Stedinger et al., 2008). Specifically, the GLUE method is subjective in the choice of likelihood function, feasible ranges of the model parameters, sampling strategy, definition of the prior likelihood distribution of the parameter vector, and definition of the evaluation period. Another major drawback of the GLUE method is the large number of Monte Carlo simulations required to sample the feasible parameter space; however, alternative sampling techniques, such as the MCMC technique, can significantly improve the efficiency of the GLUE method (Blasone et al., 2008). New variants of the GLUE approach have been proposed to address some of the shortcomings of the method, but several of these variants have yet to find widespread use in water-resources applications (e.g., Jacquin and Shamseldin, 2007).
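A minimal random-walk Metropolis sampler illustrates the MCMC idea; this is a generic sketch only, not the specific algorithm of Blasone et al. (2008), and the step size and iteration count are illustrative assumptions.

import numpy as np

def metropolis_sample(likelihood, theta0, step, n_iter=5000, seed=0):
    # Random-walk Metropolis: concentrates sampling where likelihood is high
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    L_cur = likelihood(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, size=theta.size)
        L_prop = likelihood(prop)
        # accept with probability min(1, L_prop / L_cur); always move
        # if the current point has zero likelihood (non-behavioural)
        if L_cur == 0.0 or rng.uniform() < L_prop / L_cur:
            theta, L_cur = prop, L_prop
        chain.append(theta.copy())
    return np.asarray(chain)

The chain concentrates simulations in the behavioural region of the parameter space rather than spreading them uniformly over the feasible ranges, which is the source of the efficiency gain.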
A variety of likelihood functions other than the Nash-Sutcliffe measure (Eq. 11.48) have been used with the GLUE approach. For example, the sum of errors and the sum of squared errors have been used in cases where there is a single performance measure. In cases of multiple performance measures, several alternative approaches have been used, two of which are sketched below: fuzzy logic approaches that combine individual likelihoods (e.g., Jia and Culver, 2008), a composite likelihood based on a weighted combination of individual likelihoods (Blasone et al., 2008), and combined likelihoods in which all performance measures must exceed a defined threshold value for the likelihood function to be nonzero (Freer et al., 2003).
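The weighted-combination and threshold combinations can be sketched as follows; the weight normalization and the product rule for the surviving runs are assumptions in the spirit of the cited approaches, not their exact formulations.

import numpy as np

def composite_likelihood(L, weights):
    # Weighted combination of K individual likelihoods per parameter set,
    # in the spirit of Blasone et al. (2008); L has shape (M, K)
    w = np.asarray(weights, dtype=float)
    return np.asarray(L, dtype=float) @ (w / w.sum())

def threshold_likelihood(L, thresholds):
    # Combined likelihood that is nonzero only when every performance
    # measure clears its threshold, in the spirit of Freer et al. (2003);
    # the product rule for the surviving runs is an assumption
    L = np.asarray(L, dtype=float)
    ok = np.all(L >= np.asarray(thresholds, dtype=float), axis=1)
    return np.where(ok, np.prod(L, axis=1), 0.0)

Either combined value can then replace the single-measure likelihood before the rescaling of Equation (11.49).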
11.3.3.4 Other Methods. In cases where the data record available for calibrating rainfall-runoff models is short (<2 years), calibration by conventional methods can be highly uncertain and can suffer from overparameterization. To address this problem, Perrin et al. (2008) proposed a method that simply selects a model parameter set from among parameter sets calibrated on other catchments; the parameter set that best matches the observed data is selected. Although this so-called discrete parameterization method is not as efficient as a classical calibration approach when long time series are available for calibration, it provides more robust parameter sets when the flow time series available for calibration is shorter than 2 years.

11.4 VALIDATION
Validation is a process to evaluate the accuracy, uncer-
tainty, and bias in calibrated model predictions. Alter-
natively, model validation can be defined as the process
of demonstrating that a model reflects the behavior of
the real world. The validation process is meant to ensure
that the model accurately represents the physical, chem-
ical, and biological processes and responses of the study
site. Validation differs from calibration in two essential
ways: model parameters are not adjusted during valida-
tion, and the model's performance is assessed using a
data set different from the training set used in calibra-
tion. The accuracy of the model is evaluated against the
observed data subset during the validation phase using
similar statistical and graphical techniques to those used
in calibration. If the spatial scale of the simulation
covers a large region, a rigorous evaluation of model
performance should include evaluation against multiple
field processes. At coarser scales, comparisons between model predictions and experimental observations may be difficult, since model outputs at coarser scales are less likely to involve measurable quantities, and the effects of spatial and temporal variability on model predictions may be much greater at coarser scales than at finer
scales. Models are said to be validated if their accuracy
and predictive capability in the validation period/area
are shown to lie within the predefined acceptable limits.
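A minimal validation check can be expressed as follows; the metric set (RMSE, percent bias, Nash-Sutcliffe efficiency) and the function name validation_stats are illustrative choices, not prescribed by the text.

import numpy as np

def validation_stats(y_obs, y_pred):
    # Accuracy and bias of calibrated-model predictions on the
    # held-out validation subset; parameters are NOT adjusted here
    resid = y_obs - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))                       # accuracy
    pbias = float(100.0 * resid.sum() / y_obs.sum())                 # bias (%)
    nse = float(1.0 - np.sum(resid ** 2)
                / np.sum((y_obs - y_obs.mean()) ** 2))
    return {"RMSE": rmse, "PBIAS_%": pbias, "NSE": nse}

Parameters remain fixed at their calibrated values; only the validation record is scored, and the model is judged validated if these statistics fall within the predefined acceptable limits.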