7.10.2 Deciding on Feasible Parameter Ranges
Although deciding on feasible parameter ranges is analogous to defining the prior distributions for the type
of forward uncertainty estimation of Section 7.5 or the Bayes priors of Section 7.7, it is not necessarily
easy, even given some experience of previous applications of a model. The aim is to have a parameter
space wide enough that good fits of the model are not excluded, but not so wide that the parameter values
have no sense or meaning. It is often found, however, that even if the ranges are drawn quite wide, good fits are found right up to the boundary for some parameters (as in Figure 7.8a). This may be because the model predictions are not very sensitive to those parameters, or it may be that the range has not been drawn widely enough, since fits clustering at a boundary imply that good fits may also lie beyond the edge of the range. The best
suggestion is to start with quite wide ranges and see if they can be narrowed down after an initial sampling
of the parameter space.
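As a minimal sketch of this suggestion, the following Python fragment samples wide uniform ranges and checks whether behavioural fits pile up against a range boundary. The parameter names, ranges, behavioural threshold and the stand-in efficiency function are all invented for illustration; in practice the score would come from running the actual model against observations.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical feasible ranges for two parameters (illustrative values only)
ranges = {"K": (1.0, 500.0), "Smax": (10.0, 300.0)}
n_samples = 10_000

# Uniform independent sampling across the specified ranges
theta = np.column_stack([rng.uniform(lo, hi, n_samples)
                         for lo, hi in ranges.values()])

def efficiency(params):
    # Stand-in for running the model and scoring it against observations
    # (e.g. a Nash-Sutcliffe efficiency); this toy surface peaks near
    # K = 450, Smax = 150, i.e. close to the upper edge of the K range.
    k, smax = params
    return 1.0 - ((k - 450.0) / 500.0) ** 2 - ((smax - 150.0) / 300.0) ** 2

scores = np.apply_along_axis(efficiency, 1, theta)
behavioural = theta[scores > 0.95]  # illustrative behavioural threshold

# Diagnostic: if behavioural fits cluster near a boundary, the range has
# probably been drawn too narrowly for that parameter and should be widened.
for j, (name, (lo, hi)) in enumerate(ranges.items()):
    margin = 0.05 * (hi - lo)
    near_edge = ((behavioural[:, j] < lo + margin) |
                 (behavioural[:, j] > hi - margin)).mean()
    print(f"{name}: {near_edge:.1%} of behavioural samples near a boundary")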
There may, of course, be some prior information about parameters. This information may take a number
of forms. The first would be some sense of expected distribution and covariance of the parameter values.
Some parameter sets, within the specified ranges, may be known a priori to be infeasible on the basis of past performance or mechanistic arguments. Each parameter set could then still be formed by uniformly sampling the parameter space but given a prior likelihood (perhaps of zero). If the prior likelihood is zero, it is not necessary to run the model at all, since that parameter set is considered infeasible.
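In code, such prior screening might look like the following sketch; the feasibility rule, parameter names and ranges are hypothetical, and the model run itself is left as a comment.

import numpy as np

rng = np.random.default_rng(1)

def prior_likelihood(k, smax):
    # Hypothetical mechanistic argument: very fast drainage combined
    # with very small storage is ruled out a priori.
    if k > 400.0 and smax < 50.0:
        return 0.0  # infeasible parameter set
    return 1.0      # otherwise a uniform prior weight

kept = []
for _ in range(5000):
    # Parameter sets are still formed by uniform sampling of the space
    k = rng.uniform(1.0, 500.0)
    smax = rng.uniform(10.0, 300.0)
    w = prior_likelihood(k, smax)
    if w == 0.0:
        continue  # zero prior likelihood: no need to run the model
    # likelihood = w * evaluate_model(k, smax)  # model run would go here
    kept.append((k, smax, w))

print(f"{len(kept)} of 5000 sampled sets passed the prior screening")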
An interesting question arises when there are measured values available for one, some or all parameter
values in the model. In some (rare) cases, it may even be possible to specify distributions and covari-
ances for the parameter values on the basis of measurements. These could then be used to specify prior
likelihood weights in the (uniformly) sampled parameter space. Although such measurements are often the best information we have about parameter values, there is no guarantee that values measured at one scale will reflect the effective values required in the model to achieve satisfactory functional prediction of observed variables. As with observed and predicted variables, the measured and effective values of a parameter may be incommensurate. Measurements might then feed disinformation into the prior parameter distributions. If the parameter space is sampled widely enough to include suitable effective parameter values, however, the repeated application of Bayes equation or some other way of combining likelihood measures (see Box 7.2) should result in the performance of the model increasingly dominating the shape of the response surface relative to the initial prior estimates of the parameter distributions.
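In a generic notation (not necessarily that used in Box 7.2), one form of this combination is

L_p(θ_i | Y) = L_o(θ_i) · L(θ_i | Y) / C

where L_o(θ_i) is the prior likelihood of the i-th sampled parameter set, L(θ_i | Y) is the likelihood measure calculated from the model's performance against the observations Y, and C is a scaling constant chosen so that the posterior likelihoods L_p sum to one over all sampled sets. As each new evaluation period is added, the current posterior takes over the role of the prior, which is why the data-based likelihoods come to dominate the initial prior estimates.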
7.10.3 Deciding on a Sampling Strategy
The choice of a sampling strategy may be very important: if a large number of parameters are included
in the analysis, a very large number of model runs is required to define the form of the response surface
adequately in a high-dimensional parameter space. The idea of using randomly chosen parameter sets is at least to obtain a large sample from this space. In most applications of GLUE to date, a uniform independent
sampling of parameters in the parameter space has been used. This ensures the prior independence of the
parameter sets before their evaluation using the chosen likelihood measure and is very easy to implement
but can be a relatively inefficient strategy if large areas of the parameter space result in nonbehavioural
simulations.
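A toy calculation makes the inefficiency concrete; the five-parameter space and the behavioural region are invented purely for the example.

import numpy as np

rng = np.random.default_rng(7)
n_runs = 20_000

# Uniform independent sampling of a five-parameter space scaled to [0, 1]
theta = rng.uniform(0.0, 1.0, size=(n_runs, 5))

# Stand-in behavioural test: only a small corner of the space is acceptable
behavioural = np.all(np.abs(theta - 0.8) < 0.1, axis=1)

print(f"Behavioural fraction: {behavioural.mean():.3%}")
# Each parameter must fall in an interval of width 0.2, so only about
# 0.2**5 = 0.032% of the uniform samples are behavioural: nearly every
# model run is "wasted" on a nonbehavioural simulation.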
The computational expense of the many thousands of simulations needed to define the response surface adequately is the major reason why Monte Carlo methods have not been more widely used in hydrological modelling. The greater the number of parameters and the complexity of the response surface, the greater the number of simulations required. This constraint is
becoming less limiting, at least for relatively simple models, as computer power continues to increase and
prices continue to fall. The recent development of low-cost, Ethernet-linked, parallel PC systems built from off-the-shelf boxes means that Monte Carlo simulation will become increasingly feasible in both research and practice.
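Because each Monte Carlo realisation is independent of the others, the runs distribute trivially across processor cores or networked machines. A minimal sketch using Python's standard multiprocessing module is given below, with a placeholder likelihood in place of a real rainfall-runoff model; the parameter names and ranges are again illustrative.

import multiprocessing as mp
import numpy as np

def run_one(seed):
    # One realisation: sample a parameter set, run the (placeholder)
    # model, and return the parameters with their likelihood score.
    rng = np.random.default_rng(seed)
    k = rng.uniform(1.0, 500.0)
    smax = rng.uniform(10.0, 300.0)
    score = -((k - 250.0) ** 2 + (smax - 150.0) ** 2)  # placeholder likelihood
    return k, smax, score

if __name__ == "__main__":
    # Independent realisations need no communication between runs, which
    # is what makes cheap PC clusters attractive for this kind of study.
    with mp.Pool() as pool:
        results = pool.map(run_one, range(10_000))
    print(len(results), "runs completed")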