where we have defined the resolution operator as R = SG. It is more instructive to introduce the target model m explicitly, and we find that

m̃ − m = (I − R)(m₀ − m) + Se ,    (11.4)

with the identity matrix I. The last equation clearly shows that our estimated model m̃ deviates from the target model m by two terms. The first one is due to imperfect resolution, meaning that the resolution matrix is not equal to the identity matrix (R ≠ I). The second term is the result of data errors propagating into the estimated solution.
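The decomposition in Equation (11.4) holds exactly for any approximate inverse S, which can be verified numerically. The following is a minimal sketch on a synthetic linear problem; the matrix G, the damping value, the noise level, and all sizes are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic linear problem d = G m + e (all values illustrative).
n_data, n_model = 12, 6
G = rng.normal(size=(n_data, n_model))
m_true = rng.normal(size=n_model)        # target model m
m0 = np.zeros(n_model)                   # prior/reference model
e = 0.05 * rng.normal(size=n_data)       # data errors
d = G @ m_true + e

# Any approximate inverse S will do; here a damped least-squares choice.
damping = 0.1
S = np.linalg.solve(G.T @ G + damping * np.eye(n_model), G.T)

R = S @ G                                # resolution operator R = SG
m_est = m0 + S @ (d - G @ m0)            # estimated model

# Check Equation (11.4): m_est - m = (I - R)(m0 - m) + S e
lhs = m_est - m_true
rhs = (np.eye(n_model) - R) @ (m0 - m_true) + S @ e
print(np.allclose(lhs, rhs))             # True
```

The identity holds to machine precision because it follows from substituting d = Gm + e into the update formula, independent of how S was constructed.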
There is considerable freedom and choice involved in the construction of the approximate inverse operator S (e.g., Parker, 1994; Tarantola, 2005). The most general approach starts by assigning a probability density σ_M(m) to each model, i.e.

σ_M(m) = k ρ_M(m) L(m) ,    (11.5)

where ρ_M is the prior distribution in the model space, L the likelihood function, which measures how well the model explains the data within their uncertainty, and k a normalizing constant (e.g., Tarantola, 2005). Assuming that both the data uncertainty and our prior knowledge can adequately be described by Gaussian distributions, Equation (11.5) takes the form

σ_M(m) = k e^(−χ(m)/2) ,    (11.6)

with the misfit functional

χ(m) = (d − Gm)ᵀ C_d⁻¹ (d − Gm) + (m − m₀)ᵀ C_m⁻¹ (m − m₀) .    (11.7)

The superscript T denotes vector transposition, and C_d and C_m are the covariance matrices in the data and model space, respectively. On the basis of Equations (11.6) and (11.7) we define our estimator m̃ as the maximum-likelihood model, i.e. the model that maximizes (11.6) and minimizes (11.7). Requiring that the derivative of χ with respect to m vanishes at the position of the maximum-likelihood model m̃, we find that S is determined by

S = (GᵀG + (σ_d²/σ_m²) I)⁻¹ Gᵀ ,    (11.8)

where we assumed, for simplicity, that the covariance matrices are diagonal, i.e. C_d = σ_d² I and C_m = σ_m² I. The symbol σ_d denotes the standard deviation of the data uncertainty, and σ_m is the standard deviation of the prior model range. The posterior model covariance is then simply

C̃_m = σ_d² (GᵀG + (σ_d²/σ_m²) I)⁻¹ ,    (11.9)

where we note that such explicit expressions can only be obtained on the basis of Gaussian statistics. Equation (11.8) reveals a dilemma in the solution of inverse problems: For most realistic applications, the matrix (GᵀG + (σ_d²/σ_m²) I) is badly conditioned or not invertible at all, unless the ratio σ_d²/σ_m² is artificially increased. In this case, the initial variances are used to regularize the inversion, and not to objectively quantify data errors and prior knowledge, as originally intended. Decreasing σ_m for the purpose of regularization also reduces the posterior covariance, therefore providing an unrealistically optimistic estimate of the errors in our inferred model m̃.

Expression (11.8) nicely reveals the effect that regularization has on the estimated model m̃. A large data uncertainty (σ_d large) and a narrow search around the prior model (σ_m small) result in a small approximate inverse S, and hence little update of m₀. This explains why most tomographic inversions recover only a fraction of the amplitudes of the actual heterogeneities. Furthermore, the regularization employed in the construction of S reduces the resolution, because R = SG.
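Equations (11.8) and (11.9) translate directly into a few lines of linear algebra. The sketch below assumes a synthetic G and illustrative values for σ_d and σ_m; it also demonstrates the point about regularization made above, namely that shrinking σ_m reduces the reported posterior variances.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes and standard deviations (all values assumed).
n_data, n_model = 20, 8
G = rng.normal(size=(n_data, n_model))
sigma_d = 0.1

def approximate_inverse(G, sigma_d, sigma_m):
    """S = (G^T G + (sigma_d^2/sigma_m^2) I)^(-1) G^T, Equation (11.8)."""
    n = G.shape[1]
    A = G.T @ G + (sigma_d**2 / sigma_m**2) * np.eye(n)
    return np.linalg.solve(A, G.T)

def posterior_covariance(G, sigma_d, sigma_m):
    """C_m_post = sigma_d^2 (G^T G + (sigma_d^2/sigma_m^2) I)^(-1), Equation (11.9)."""
    n = G.shape[1]
    A = G.T @ G + (sigma_d**2 / sigma_m**2) * np.eye(n)
    return sigma_d**2 * np.linalg.inv(A)

S = approximate_inverse(G, sigma_d, sigma_m=1.0)   # shape (n_model, n_data)

# A narrower prior (smaller sigma_m) regularizes more strongly, but it
# also shrinks the posterior variances, i.e. the error estimate becomes
# optimistic rather than objective:
C_broad  = posterior_covariance(G, sigma_d, sigma_m=1.0)
C_narrow = posterior_covariance(G, sigma_d, sigma_m=0.01)
print(np.trace(C_narrow) < np.trace(C_broad))      # True
```

The trace comparison is guaranteed here because adding a larger multiple of the identity to GᵀG can only decrease the eigenvalues of its inverse.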
Moreover, as seen from Equation (11.4), regularization acts as a trade-off between the error propagation and the imperfect resolution. For a strong regularization, Se is small and (I − R)(m₀ − m) is large, and vice versa. The knowledge of both terms
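This trade-off between the two error terms of Equation (11.4) can be made concrete numerically. The sketch below sweeps the damping parameter ε = σ_d²/σ_m² on a synthetic problem (all sizes, noise levels, and ε values are assumptions for illustration): stronger regularization shrinks the propagated data error ‖Se‖ while inflating the resolution error ‖(I − R)(m₀ − m)‖.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linear problem (sizes and noise level are illustrative).
n_data, n_model = 15, 6
G = rng.normal(size=(n_data, n_model))
m_true = rng.normal(size=n_model)
m0 = np.zeros(n_model)
e = 0.05 * rng.normal(size=n_data)

def error_terms(epsilon):
    """Return (||S e||, ||(I - R)(m0 - m)||) for damping epsilon = sigma_d^2/sigma_m^2."""
    S = np.linalg.solve(G.T @ G + epsilon * np.eye(n_model), G.T)
    R = S @ G
    prop = np.linalg.norm(S @ e)                                  # error propagation
    res = np.linalg.norm((np.eye(n_model) - R) @ (m0 - m_true))   # imperfect resolution
    return prop, res

weak_Se, weak_res = error_terms(0.01)       # weak regularization
strong_Se, strong_res = error_terms(100.0)  # strong regularization

print(strong_Se < weak_Se)    # True: less data error propagated
print(strong_res > weak_res)  # True: larger resolution error
```

In the eigenbasis of GᵀG, the propagated-error components scale as 1/(λ + ε) and the resolution-error components as ε/(λ + ε), so the two norms move in opposite directions as ε grows, which is exactly the trade-off described above.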