an exponential transformation of an error measure E, since for j = 1, 2, ..., m periods with L_j = exp(E_j):

\[ L_p(M(i) \mid Y) = \exp(E_1)\,\exp(E_2)\cdots\exp(E_m) = \exp(E_1 + E_2 + \cdots + E_m) \qquad (B7.2.3) \]
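The equivalence of the product and summed-exponent forms in Equation (B7.2.3) is easily checked numerically. The following is a minimal sketch only (it is not taken from any published GLUE code), and the error measures used are arbitrary illustrative values:

```python
# Minimal sketch of Equation (B7.2.3): multiplying per-period likelihoods of the
# form L_j = exp(E_j) is the same as exponentiating the sum of the error measures.
# The values of E_1 ... E_m below are purely illustrative.
import numpy as np

E = np.array([-1.2, -0.8, -2.5])           # error measures E_1 ... E_m for m periods

product_form = np.prod(np.exp(E))          # exp(E_1) * exp(E_2) * ... * exp(E_m)
sum_form = np.exp(E.sum())                 # exp(E_1 + E_2 + ... + E_m)

assert np.isclose(product_form, sum_form)  # the two forms are identical
print(product_form, sum_form)
```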
However, Bayes equation is not the only way of combining likelihoods. A simple weighted addition might be considered more appropriate in some cases, such that for m different measures:
\[ L_p(M(i) \mid Y) = \frac{W_o\,L_o(M(i)) + W_1\,L(M(i) \mid Y_1) + \cdots + W_m\,L(M(i) \mid Y_m)}{C} \qquad (B7.2.4) \]
where W_o to W_m are weights, Y_1 to Y_m are different evaluation data sets and C is calculated to ensure that the cumulative posterior likelihoods sum to one. A weighted addition of this form has the effect of averaging over any zero likelihood values for any individual periods.
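As a hedged illustration of Equation (B7.2.4), the sketch below combines a uniform prior likelihood with two conditioning likelihoods for a handful of sampled models. The likelihood arrays and weights are invented purely for demonstration and do not come from any real model evaluation:

```python
# Minimal sketch of the weighted-addition combination in Equation (B7.2.4).
# Illustrative values only: L_prior stands for L_o(M(i)), the rows of L_cond
# stand for L(M(i)|Y_1) and L(M(i)|Y_2), and W holds the weights W_o, W_1, W_2.
import numpy as np

N = 5                                           # number of sampled models/parameter sets
L_prior = np.full(N, 1.0 / N)                   # uniform prior likelihood weights
L_cond = np.array([                             # conditioning likelihoods (rows: data sets)
    [0.9, 0.0, 0.4, 0.7, 0.2],
    [0.8, 0.3, 0.0, 0.6, 0.1],
])
W = np.array([0.2, 0.4, 0.4])                   # weights W_o, W_1, W_2

weighted_sum = W[0] * L_prior + W[1:] @ L_cond  # numerator of Equation (B7.2.4)
C = weighted_sum.sum()                          # C ensures the posterior weights sum to one
L_post = weighted_sum / C

print(L_post, L_post.sum())                     # a zero value in one period does not
                                                # force a zero posterior likelihood
```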
Further forms of combination come from fuzzy set theory. Fuzzy operations might be considered appropriate for the type of fuzzy likelihood measures introduced in Box 7.1. A fuzzy union of several measures is effectively the maximum value of any of the measures, so that
\[ L_p(M(i) \mid Y) = \frac{L_o(M(i)) \cup L_1(M(i) \mid Y_1) \cup \cdots \cup L_m(M(i) \mid Y_m)}{C} = \frac{\max\{L_o(M(i)),\ L_1(M(i) \mid Y_1),\ \ldots,\ L_m(M(i) \mid Y_m)\}}{C} \]
The fuzzy intersection of a set of measures is the minimum value of any of the measures:
\[ L_p(M(i) \mid Y) = \frac{L_o(M(i)) \cap L_1(M(i) \mid Y_1) \cap \cdots \cap L_m(M(i) \mid Y_m)}{C} = \frac{\min\{L_o(M(i)),\ L_1(M(i) \mid Y_1),\ \ldots,\ L_m(M(i) \mid Y_m)\}}{C} \]
Thus, taking a fuzzy union emphasises the best performance of each model or parameter set over all the measures considered; taking a fuzzy intersection emphasises the worst performance. In particular, if any of the measures is zero, taking a fuzzy intersection leads to the rejection of that model as nonbehavioural. All of these possibilities for combining likelihood measures are included in the GLUE software (see Appendix A).
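A minimal sketch of the fuzzy union (max) and fuzzy intersection (min) combinations is given below, reusing the same illustrative likelihood arrays as in the previous sketch; again, nothing here is taken from the GLUE software itself:

```python
# Minimal sketch of the fuzzy union and intersection combinations.
# Illustrative values only: the rows of `measures` stand for L_o(M(i)),
# L_1(M(i)|Y_1) and L_2(M(i)|Y_2) for five sampled models.
import numpy as np

L_prior = np.full(5, 0.2)
L_cond = np.array([
    [0.9, 0.0, 0.4, 0.7, 0.2],
    [0.8, 0.3, 0.0, 0.6, 0.1],
])
measures = np.vstack([L_prior, L_cond])

union = measures.max(axis=0)          # fuzzy union: best performance of each model
intersection = measures.min(axis=0)   # fuzzy intersection: worst performance of each model

L_union = union / union.sum()         # rescale by C so the posterior weights sum to one
L_inter = intersection / intersection.sum()

print(L_union)
print(L_inter)                        # the second and third models get zero weight: any
                                      # zero measure rejects the model as nonbehavioural
```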
Box 7.3 Defining the Shape of a Response or Likelihood Surface
In estimating the uncertainty associated with a set of model predictions, it is generally necessary
to evaluate the outputs associated with different sets of parameter values and their associated likelihood weights. In a forward uncertainty analysis, the likelihood weights are known from
the definition of the prior distributions of the parameters and it is relatively easy to define
an efficient sampling scheme. When the likelihood weights need to be determined from a
conditioning process of comparing predicted and observed variables, however, it is not known
exactly where in the parameter space the areas of high likelihood might be, even if in a
Bayesian analysis some prior likelihoods have been specified. This gives rise to a sampling
problem, since there is no point in repeatedly sampling areas of low likelihood in the space. A
simple random Monte Carlo sampling technique might then be very inefficient, especially in
high-dimensional spaces where areas of high likelihood are very localised. Ideally, therefore