of the way in which system properties determine system
behaviour based partly on the mathematical equations
that determine y from x and partly from relevant initial
and boundary conditions, forcing functions and so forth.
Usually, f is implemented in the form of a computer
simulator. We suppose also that we have some system
field data z comprising observations made on the system
corresponding to some subvector of y.
There are two main reasons for interest in such a model.
First, we may want to gain insights into the general
behaviour of the model; for example, to assess which
features of the system properties are most important for
determining the system behaviour and how sensitive such
relationships are to misspecification and other factors. If f
in Equation 26.1 is a new version of a pre-existing model,
then we will want to assess the form and magnitude of
the changes between versions. Similarly, we may want
to compare the model to other pre-existing models for
the same phenomenon. There are many ways to gain
such insights. One of the simplest, if the model can
be evaluated quickly, is to make many evaluations of the
model at widely differing choices of input parameters and
to carry out a careful data analysis of the resulting joint
collections of input and output values (sometimes called
meta-modelling). In such an analysis, we will also look for
anomalous and counter-intuitive behaviour in the model
that may enable us to detect errors in the computer
simulator, namely features that are wrong in ways that we
are able to fix. These may be simple coding errors, data-
transcription errors, mistakes in our implementation of
numerical solvers or problems with the science used in
our problem formulation for which we can see ways to
formulate effective alternatives within the limitations of
time and resources that are available.
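The meta-modelling approach described above can be sketched as follows. This is a minimal illustration, not any particular author's method: the simulator f, its two inputs and the use of simple correlations as sensitivity measures are all hypothetical stand-ins for a real analysis.

```python
import random
import statistics

def f(x1, x2):
    # Toy stand-in for a fast computer simulator: the output
    # depends strongly on x1 and only weakly on x2.
    return 3.0 * x1 + 0.1 * x2 ** 2

random.seed(0)

# Many evaluations at widely differing choices of input parameters.
samples = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(1000)]
outputs = [f(x1, x2) for x1, x2 in samples]

def corr(xs, ys):
    # Pearson correlation, used here as a crude sensitivity measure.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / ((len(xs) - 1) * sx * sy)

# Which input is most important for determining the output?
r1 = corr([s[0] for s in samples], outputs)
r2 = corr([s[1] for s in samples], outputs)
```

In a real study one would replace the toy simulator with the actual code, use a space-filling design rather than independent uniform draws, and examine the joint input-output data for the anomalous behaviour discussed above as well as for sensitivities.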
Secondly, when we have completed this analysis, we
then often pass to a further stage of using the model
to make inferences about specific physical systems; for
example, to help to understand actual water flow for
specified catchment areas. We will have much greater
confidence in our use of model predictions for an actual
system if we have a good intuitive feel for the general
behaviour of the model, and we have carried out a careful
error analysis for the code. In this chapter, we will focus
attention on this second stage, as it is natural to consider
model adequacy in the context of practical purposes for
which the model is to be used.
We therefore consider whether a model is adequate for
representing a given physical system for some specified
purpose. In all but the most elementary problems, the
behaviour of the model will not be precisely the same as
the behaviour of the system.
One reason for this difference is that we must
simplify our description of the system properties: partly
because we cannot fully describe the science determining
the effect of system properties on system behaviour, partly
because, even with the simplified science that we choose
to implement, we will typically need to approximate the
solution of the equations required to determine the rela-
tionships between system properties and behaviour and
partly because the forcing functions, initial conditions,
boundary conditions and so forth are rarely known with
certainty. This irresolvable difference between the output
of the model and the performance of the physical system
is often termed model discrepancy.
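One common way to make this difference concrete is to write a field observation as z = f(x*) + d + e, where x* is a best choice of inputs, d is the model discrepancy and e is observational error. The following sketch shows how such a decomposition turns a single simulator output into a probabilistic prediction; all numerical values are purely illustrative assumptions.

```python
import math

# Illustrative values only: a simulator output at a chosen "best"
# input, plus assessed variances for discrepancy and observation error.
f_at_best_input = 12.0   # f(x*)
var_discrepancy = 0.5    # Var(d), the irresolvable model-system gap
var_obs_error = 0.2      # Var(e), measurement error on field data

# Treating d and e as independent, zero-mean terms, the predictive
# distribution for a field observation z has:
predictive_mean = f_at_best_input
predictive_sd = math.sqrt(var_discrepancy + var_obs_error)
```

The point of the sketch is that the simulator alone gives only predictive_mean; it is the assessed discrepancy and error variances that supply the uncertainty against which adequacy must be judged.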
A crucial part of the assessment of model adequacy
comes from assessing the magnitude of model discrepancy
and then deciding whether it is so large that it renders
the model unfit for the intended uses. It is rare that we can
place a precise value on this discrepancy, as, otherwise,
we would have incorporated this assessment directly into
the model itself. We must therefore usually carry out an
uncertainty analysis. We take the view that the model
does not make deterministic predictions about system
behaviour but, rather, offers probabilistic predictions for
such behaviour. The level of uncertainty associated with
these predictions will determine whether the model is
adequate for the intended purposes.
The sources of uncertainty that we must usually deal
with are: (i) input uncertainty, as we are unsure as to
which is the appropriate value of the inputs at which
to evaluate the model, or even whether there is any
meaningful choice of input parameters; (ii) functional
uncertainty, as, for complex, slow-to-run models, there
will be large areas of the input space that will be explored
only very lightly; (iii) observational error, complicating
our ability to assess the quality of model fit to historical
field data; (iv) forcing-function, initial condition and
boundary-condition uncertainty; (v) general aspects of
model uncertainty, for example problems arising when
we train a model on data in one context but we intend to
use the model in a very different context. We may view
a model as adequate in principle if model discrepancy
is small. However, all sources of uncertainty should be
included in a composite uncertainty analysis, as the model
will only be adequate in practice if we can control all
of the relevant sources of uncertainty to a level where
predictions are sufficiently accurate for the purpose in
hand.
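A composite uncertainty analysis of the kind just described can be sketched by Monte Carlo, propagating several sources of uncertainty at once through the model. The simulator, the distributions and the tolerance below are hypothetical assumptions chosen only to show the shape of the calculation; sources (ii), (iv) and (v) would need their own treatment in practice.

```python
import random
import statistics

def simulator(x):
    # Toy stand-in for the computer simulator f.
    return 2.0 * x + 1.0

random.seed(1)

predictions = []
for _ in range(10000):
    x = random.gauss(0.5, 0.1)   # (i) input uncertainty
    d = random.gauss(0.0, 0.3)   # model discrepancy
    e = random.gauss(0.0, 0.1)   # (iii) observational error
    predictions.append(simulator(x) + d + e)

predictive_mean = statistics.mean(predictions)
predictive_sd = statistics.stdev(predictions)

# The model is adequate in practice only if the combined spread is
# small enough for the purpose in hand (tolerance is illustrative).
tolerance = 0.5
adequate = predictive_sd < tolerance
```

The final comparison makes the chapter's point operational: adequacy is not a property of the model alone but of the combined uncertainty relative to the accuracy the intended purpose demands.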