observations. Even then, however, because of the epistemic nature of the many sources of error in the
modelling process, there may still be surprises in making predictions outside of the calibration data set.
7.2.2 Assessing Parameter Sensitivity
The efficiency of parameter calibration would clearly be enhanced if it were possible to concentrate
the effort on those parameters to which the model simulation results are most sensitive. This requires an
approach to assessing parameter sensitivity within a complex model structure. Sensitivity can be assessed
with respect either to predicted variables (such as peak discharges, discharge volumes, water table levels,
snowmelt rates, etc.) or to some performance measure (see Section 7.3). Both can be thought
of in terms of their respective response surfaces in the parameter space. One definition of the sensitivity
of the model simulation results to a particular parameter is the local gradient of the response surface in
the direction of the chosen parameter axis. This can be used to define a normalised sensitivity index of
the form:
S_i = (dQ/dx_i) / (Q/x_i)    (7.1)
where S_i is the sensitivity index with respect to parameter i with value x_i, and Q is the value of the
variable or performance measure at that point in the parameter space (see, for example, McCuen, 1973).
The gradient will be evaluated locally, given values of the other parameters, either analytically for simple
models or numerically by a finite difference, i.e. by evaluating the change in Q as x_i is changed by a small
amount (say 1%). Thus, since the simulation results depend on all the parameters, the sensitivity S_i for
any particular parameter i will tend to vary through the parameter space (as illustrated by the changing
gradients for the simple cases in Figure 7.2). Because of this, sensitivities are normally evaluated in the
immediate region of a best estimate parameter set or an identified optimum parameter set after a model
calibration exercise.
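As a concrete illustration of Equation (7.1), the finite-difference evaluation of the normalised sensitivity index might be sketched as follows; the model function and parameter values here are purely hypothetical stand-ins for a real rainfall-runoff model:

```python
import numpy as np

def normalised_sensitivity(model, params, i, delta=0.01):
    """Normalised sensitivity index S_i (Equation 7.1), evaluated locally
    by a finite difference: perturb parameter i by a small fraction
    (1% by default) and measure the change in the model output Q."""
    q0 = model(params)
    perturbed = params.copy()
    dx = delta * params[i]
    perturbed[i] += dx
    dq = model(perturbed) - q0
    # S_i = (dQ/dx_i) / (Q/x_i)
    return (dq / dx) / (q0 / params[i])

# Toy "model" with known sensitivities: Q = a * b**2, so S_a = 1, S_b = 2
model = lambda p: p[0] * p[1] ** 2
params = np.array([3.0, 2.0])
s_a = normalised_sensitivity(model, params, 0)
s_b = normalised_sensitivity(model, params, 1)
```

Because the index is normalised by x_i/Q, it is dimensionless and comparable across parameters with different units; as noted above, its value depends on where in the parameter space it is evaluated.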
This is, however, a very local estimate of sensitivity in the parameter space. A more global estimate
might give a more generally useful indication of the importance of a parameter within the model structure.
There are a number of global sensitivity analysis techniques available. The method proposed by van
Griensven et al. (2006) extends the local sensitivity approach by averaging over a Latin Hypercube
sample of points in the parameter space, consistent with prior estimates of the joint distribution of the
parameter values, but a simple average might be misleading. A technique that makes minimal assumptions
about the shapes of the response surface is variously known as Generalised Sensitivity Analysis (GSA),
regionalised sensitivity analysis (RSA) or the Hornberger-Spear-Young (HSY) method (Hornberger and
Spear, 1981; Young, 1983; Beck, 1987) which was a precursor of the GLUE methodology described in
Section 7.10. The HSY method is based on Monte Carlo simulation, which makes many different runs of
a model with each run using a randomly chosen parameter set. In the HSY method, the parameter values
are chosen from uniform distributions spanning specified ranges for each parameter. The ranges should
reflect the feasible parameter values in a particular application. The idea is to obtain a sample of model
simulations from throughout the feasible parameter space. The simulations are classified in some way
into those that are considered behavioural and those that are considered nonbehavioural in respect of the
system being studied. Behavioural simulations might be those with a high value of a certain variable or
performance measure, nonbehavioural those with a low value.
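A minimal sketch of this Monte Carlo sampling and classification step, assuming a toy stand-in for the model and an arbitrary threshold (here the median performance) to split behavioural from nonbehavioural runs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical feasible ranges for two parameters of a toy model
ranges = {"k": (0.1, 10.0), "n": (1.0, 3.0)}

def run_model(k, n):
    # Stand-in for a real rainfall-runoff model: returns a scalar
    # performance measure for each sampled parameter set
    return np.exp(-((np.log(k) - 1.0) ** 2)) * (n / 3.0)

# Uniform random sample spanning the feasible parameter space
n_runs = 1000
samples = {p: rng.uniform(lo, hi, n_runs) for p, (lo, hi) in ranges.items()}
scores = run_model(samples["k"], samples["n"])

# Classify: behavioural = performance above the chosen threshold
behavioural = scores > np.median(scores)
```

In a real application the definition of "behavioural" is a subjective modelling choice, and the uniform ranges encode the prior feasible limits for each parameter.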
HSY sensitivity analysis then looks for differences between the behavioural and nonbehavioural sets
for each parameter. It does so by comparing the cumulative distribution of that parameter in each set.
Where there is a strong difference between the two distributions for a parameter, it may be concluded
that the simulations are sensitive to that parameter (Figure 7.3b). Where the two distributions are very
similar, it may be concluded that the simulations are not very sensitive to that parameter (Figure 7.3c). A
quantitative measure of the difference between the distributions can be calculated using the nonparametric Kolmogorov-Smirnov d statistic.
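The comparison of cumulative parameter distributions between the behavioural and nonbehavioural sets can be sketched with the two-sample Kolmogorov-Smirnov d statistic, the maximum vertical distance between the two empirical CDFs; the samples below are purely synthetic, constructed to mimic the sensitive and insensitive cases:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov d statistic: the maximum vertical
    distance between the empirical CDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(1)

# Sensitive parameter: behavioural values cluster in the lower half of
# the range, so the two distributions differ strongly (cf. Figure 7.3b)
d_strong = ks_distance(rng.uniform(0.0, 0.5, 500), rng.uniform(0.0, 1.0, 500))

# Insensitive parameter: both sets look like the same uniform prior,
# so the two distributions are nearly identical (cf. Figure 7.3c)
d_weak = ks_distance(rng.uniform(0.0, 1.0, 500), rng.uniform(0.0, 1.0, 500))
```

A large d for a parameter indicates that the behavioural classification discriminates strongly along that parameter axis, i.e. that the simulations are sensitive to it.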