There are different views as to what constitutes an appropriate
formulation for an uncertainty analysis. We shall
describe our analysis from a Bayesian viewpoint. In this
view, all uncertainties may be expressed as best current
judgements in probabilistic form and then combined
with observational data by the usual probabilistic rules.
The advantage of this approach is that it places all of
the uncertainties in relating model behaviour to system
behaviour within a common framework and produces a
probabilistic assessment that represents the best current
judgements of the expert in a form which is appropriate
for use in subsequent decision analysis.
As with any other aspect of the modelling process, we
can make such a probabilistic assessment with different
degrees of detail and care. It may be enough to make
a rough order-of-magnitude assessment of the most
important aspects of model discrepancy, or we may need
to carry out a more careful analysis. As a simple rule of
thumb, the more we intend to rely on the model to make
decisions with important consequences, under conditions
substantially different from those for which we have
historical data, for example when extrapolating over large
time scales, the more careful we will need to be in our
assessments of model discrepancy. We will
also be limited in our ability to make a full uncertainty
analysis by factors such as the dimension and complexity
of the model, the time that it takes to carry out a single
model evaluation, whether there are any other models
against which we may compare our analysis and the
nature and extent of any historical data which we may use
to assess the performance of the model (see discussion in
Chapter 2). In our account, we will introduce some basic
analyses that we may wish to carry out. The uncertainties
that we shall refer to may be assessed as variances, as full
probability distributions or as an uncertainty description
at some intermediate level of complexity. In our example
analyses, we will illustrate some particular forms that such
calculations might take.
There are two basic aspects to model discrepancy. First,
we may assess intrinsic limitations to the model whose
order of magnitude we may quantify by direct computer
experimentation. We refer to these as internal model dis-
crepancies and quantify them by analysis of the computer
output itself. There are two general types of internal dis-
crepancy. The first type is due to lack of precise knowledge
of the values of certain quantities which are required in
order to evaluate the model but which it is inappropriate
to treat as part of the model-input specification x. For
example, if we judge that the elements of the forcing
function for the system are only determined within, say,
10%, then we may assess the effect on the output of
the model of making a series of model evaluations with
varying values of the forcing function within the specified
limits. The second type of internal discrepancy is due
to acknowledged limitations in the ways in which the
model equations transform system properties into system
behaviour. For example, a common practical modelling
structure is to determine a spatio-temporal series of sys-
tem responses by propagating a state equation across
time and space. Each propagation step involves a level of
approximation. Provided that we have access to the gov-
erning equations of the model, we can directly assess the
cumulative effect of such approximations by introducing
an element of uncertainty directly into the propagation
step in the equations for the system state, reimposing
system constraints as necessary after propagation, and
making a series of evaluations of the model based on
simulating the variation in overall system behaviour with
differing levels of propagation uncertainty.
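
As a concrete illustration of these two types of internal discrepancy, the sketch below perturbs each element of the forcing function of a toy state-propagation model within the 10% limits mentioned above and, separately, injects noise into each propagation step before reimposing a simple constraint. The state equation, the tolerance applied to the forcing and the propagation noise level are illustrative assumptions, not quantities prescribed in the text.

```python
# A minimal sketch (illustrative assumptions throughout) of quantifying the
# two types of internal model discrepancy by direct computer experimentation.

import numpy as np

rng = np.random.default_rng(0)

def propagate(state, forcing, noise_sd=0.0):
    """One step of a toy state equation, with optional propagation
    uncertainty added and a non-negativity constraint reimposed."""
    new_state = 0.9 * state + forcing
    new_state += rng.normal(0.0, noise_sd)   # acknowledged step approximation
    return max(new_state, 0.0)               # reimpose system constraint

def run_model(forcing_series, noise_sd=0.0):
    state = 1.0
    trajectory = []
    for f in forcing_series:
        state = propagate(state, f, noise_sd)
        trajectory.append(state)
    return np.array(trajectory)

base_forcing = np.ones(50)   # nominal forcing, known only to about 10%
n_runs = 200

# Type 1: vary the forcing function within the +/-10% limits.
runs_forcing = np.array([
    run_model(base_forcing * rng.uniform(0.9, 1.1, size=base_forcing.size))
    for _ in range(n_runs)
])

# Type 2: add uncertainty directly into each propagation step.
runs_propagation = np.array([
    run_model(base_forcing, noise_sd=0.05) for _ in range(n_runs)
])

# Order-of-magnitude internal-discrepancy variances at each output time.
var_forcing = runs_forcing.var(axis=0)
var_propagation = runs_propagation.var(axis=0)
print("forcing contribution (final time):    ", var_forcing[-1])
print("propagation contribution (final time):", var_propagation[-1])
```

Averaging over many such runs gives the kind of order-of-magnitude variance assessment of internal discrepancy referred to above.
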
The second aspect of model discrepancy concerns all
of those aspects of the difference between the model and
the physical system which arise from features that we
cannot directly quantify by operations on the computer
model. We refer to such aspects as external model dis-
crepancies. Some external discrepancies may correspond
to features that we acknowledge to be missing from the
model and whose order of magnitude we may consider
directly, at least by thought experiments. However, our
basic means of learning about the magnitude of many
aspects of external discrepancy is by comparing model
outputs to historical field data. The difference between
the historical field observations z on the system and the
corresponding model outputs f(x), when evaluated at
the appropriate choice of inputs to represent the sys-
tem properties, is the sum of the observational error
and the internal and external model discrepancy errors.
Provided that we have already quantified uncertainty for
observational and internal model error, any further lack
of fit is due to external model error, and the magni-
tude of such mismatch between model output and field
data is therefore a guide to external model uncertainty
for historical outcomes. The extent to which this may
be considered informative for such uncertainties when
using the model to forecast future outcomes is a matter
of scientific judgement dependent on the context of the
problem in question.
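
In symbols, the paragraph above treats the historical mismatch as z - f(x) = observational error + internal discrepancy + external discrepancy. If we further assume these terms are independent and that the first two variances have already been quantified, a rough variance for the external discrepancy can be read off from the historical mismatch, as in the following sketch; the field data, model outputs and variance values are invented purely for illustration.

```python
# A minimal sketch, under assumed independent additive errors, of attributing
# residual mismatch with historical data to external model discrepancy.
# All numbers (field data, model outputs, variances) are invented.

import numpy as np

z = np.array([10.2, 11.1, 9.8, 10.7, 11.5])    # historical field observations
f_x = np.array([9.9, 10.6, 10.1, 10.2, 10.8])  # model outputs at the chosen inputs

sigma2_obs = 0.05       # observational-error variance, assumed already quantified
sigma2_internal = 0.10  # internal-discrepancy variance, from computer experiments

# Mean squared mismatch between field data and model output.
total_mismatch = np.mean((z - f_x) ** 2)

# Mismatch not explained by observational and internal error is attributed
# to external discrepancy (truncated at zero).
sigma2_external = max(total_mismatch - sigma2_obs - sigma2_internal, 0.0)
print("rough external-discrepancy variance:", sigma2_external)
```

Whether such a historically based figure carries over to forecasts of future outcomes remains, as noted above, a matter of scientific judgement.
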
In practice, we usually do not know the appropriate
choices of inputs at which to evaluate the model, as