should be used for model evaluation. Graphical techniques for streamflow simulations should include hydrographs and percent exceedance probability curves (e.g., flow-duration curves), along with the quantitative statistics NSE, Bias, and RSR. Performance ratings for the recommended statistics have been suggested by Moriasi et al. (2007), whereby model simulations can be judged as "satisfactory" if NSE > 0.50 and RSR < 0.70, and if Bias is within ±25% for streamflow, ±55% for sediment, and ±70% for nutrients (N and P), for measured data of typical uncertainty.
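As an illustration, the three recommended statistics and the Moriasi et al. (2007) "satisfactory" screen for streamflow can be sketched as follows; the function names and threshold-checking helper are ours, not part of any standard library:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squares about the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    """RMSE-observations standard deviation ratio."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

def pbias(obs, sim):
    """Percent bias; positive values indicate average underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def satisfactory_streamflow(obs, sim):
    """Apply the Moriasi et al. (2007) 'satisfactory' thresholds for streamflow."""
    return (nse(obs, sim) > 0.50 and rsr(obs, sim) < 0.70
            and abs(pbias(obs, sim)) <= 25.0)
```

Note that RSR and NSE are algebraically linked: because the same sums of squares appear in both, RSR = √(1 − NSE), so the NSE and RSR thresholds are closely related rather than independent.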
Traditionally, the correlation coefficient and SE have been used to measure the goodness of fit in calibrating hydrologic models (McCuen et al., 2006). Kim et al. (2007) used the coefficient of determination (R²), coefficient of efficiency (E), and RMSE to measure the performance of the HSPF model (Bicknell et al., 2001) in simulating flows in the North River in Virginia.
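For reference, the traditional goodness-of-fit measures mentioned above can be computed as in the following minimal sketch (the function names are ours; the coefficient of efficiency E is the Nash-Sutcliffe efficiency under another name):

```python
import numpy as np

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def r_squared(obs, sim):
    """Coefficient of determination: squared Pearson correlation coefficient."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.corrcoef(obs, sim)[0, 1] ** 2
```

Because R² measures only linear association, a simulation that is consistently biased high or low can still score R² near 1, which is one reason bias statistics are recommended alongside it.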
11.3.3 Parameter Estimation

Parameter adjustment to identify the optimal parameter set can be done either manually or automatically. In the manual process, simulated and observed output responses are compared, and an incremental trial-and-error process of parameter adjustments is attempted (within the feasible parameter space) to bring the simulated response closer to the observed watershed response. The manual process is typically complicated by several factors: a large number of parameters must be adjusted; the parameters can have similar or compensating (interacting) effects on the model output; there is no unique, unambiguous way of evaluating the closeness of the predicted and observed output; and the input data, model conceptualization, and output data are all to some extent uncertain. In spite of these challenges, major modeling agencies, such as the National Weather Service (NWS), which is responsible for developing river forecasting models for approximately 4000 locations in the United States, consider manual calibration techniques more effective than automatic calibration (Smith et al., 2003). The NWS rationale in preferring manual calibration is that the manual calibration process allows the modeler to develop a much deeper understanding of the data and models and their limitations. In spite of this assertion, automatic calibration schemes have been shown to produce improved calibrations of NWS models relative to those obtained by manual calibration (Hogue et al., 2003).

Automatic calibration methods attempt to automate the calibration process. Automatic calibration procedures have met with good success in groundwater models and limited success in surface water and water-quality models. The major problem with traditional automatic calibration methods is their underlying assumption that the available model structure is correct, leading to the elusive goal of finding a unique optimal parameter set (Gupta et al., 2003a). Typically, the closeness of the model output to observations is measured by an objective function, and there are often large regions of feasible parameter space for which the objective-function values are very similar. This observation has been taken as evidence of equifinality among models, which refers to a condition in which the available data are insufficient to distinguish between competing parameter sets. In some cases, the insensitivity of the objective function to parameter values has been taken as evidence of models that are too complex in relation to the information content of the data, leading to the assertion that the model is overparameterized (Gupta et al., 2003a). A major weakness of automatic calibration is the dependence of the identification process on a single objective function that, no matter how carefully chosen, is often inadequate to properly measure all of the characteristics of the observed data deemed to be important. In blunt terms, single-objective calibration methods do not usually provide parameter estimates that are considered acceptable by practicing hydrologists (Vrugt et al., 2003). The dependence of automatic calibration methods on a single objective function is in contrast to the manual calibration process, which typically uses a number of complementary ways of evaluating model performance. In some cases, automatic calibration is used to fine-tune a parameter set after manual calibration is complete. In many cases where novel automatic calibration methods are developed, manual calibration is used to check the plausibility of the model outcomes associated with the parameter set selected by automatic calibration (e.g., Turcotte et al., 2003).

Commonly used automatic calibration methods include simulated annealing, genetic algorithms, and shuffled complex evolution (Vrugt et al., 2003). Additional methods have been proposed for updating model parameters in real time as new data are collected (e.g., Thiemann et al., 2001) and for matching the power spectrum of measurements rather than the actual measurements (Montanari and Toth, 2007), where the latter method is mostly applicable in cases where measurements are sparse. The theory and practice of automatic calibration continue to evolve, and improved methods are continually emerging. It is again emphasized here that any automatic calibration method, no matter how advanced, gives only the optimal solution with respect to the objective function used in a specific case. Typically, the objective function cannot capture all features of the output adequately and is highly impacted by errors in the calibration data. Further, inadequate model