Tikhonov regularization is easy to link with the penalty function method considered in the previous section. Indeed, if the conditions x_k = 0 are imposed, the penalty-function solution (4.28) converts directly to (4.45). As the rigorous equality x_k = 0 cannot be attained, the factor h is selected as small as possible. Thus, Tikhonov regularization corresponds to imposing a definite constraint on the solution, namely the requirement of minimal distance between the solution and zero, i.e. a reduction of the set of possible solutions of the inverse problem. Theoretically, all regularization approaches reduce to imposing a definite constraint on the solution. The requirement x_k = 0 means that the components of vector X should not differ greatly from each other, i.e. it excludes the possibility of strongly oscillating solutions. In practice, however, it is a way to diminish the strong spread of solutions during the iterations of nonlinear problems. Actually, nowadays Tikhonov regularization is applied in all standard algorithms of nonlinear LST (see for example Box and Jenkins 1970).
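As a minimal numerical sketch (not from the book; the matrix G, vector y, and factor h below are illustrative assumptions), adding a small multiple h of the identity to the normal-equation matrix stabilizes an ill-conditioned least-squares solution in exactly the sense described above:

```python
import numpy as np

# Hypothetical, nearly rank-deficient forward matrix G (columns almost dependent).
G = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
y = np.array([2.0, 2.0001])

h = 1e-6  # regularization factor, selected as small as possible

# Ordinary LST normal equations (G^T G) x = G^T y are nearly singular here;
# Tikhonov adds h*I, which penalizes the norm of x and keeps the system solvable.
x_tik = np.linalg.solve(G.T @ G + h * np.eye(2), G.T @ y)
print(x_tik)
```

The regularized solution still fits the data closely while its norm stays bounded, which is the "constraint on the solution" at work.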
All desired parameters X in the considered statement of the atmospheric optics inverse problems have physical meaning. Hence, definite information about them is known before the observations Y are accomplished; it is called a priori information. Assume that the parameters X are characterized by an a priori mean value X̄ and an a priori covariance matrix D, and suppose that the parameter uncertainties obey a Gaussian distribution, i.e.:
\rho(X) = \frac{1}{(2\pi)^{N/2} |D|^{1/2}} \exp\left[ -\frac{1}{2} (X - \bar{X})^{+} D^{-1} (X - \bar{X}) \right].
We should point out that the mentioned a priori characteristics X̄ and D constitute information about the parameters known in advance, without considering the observations; in particular, this applies also to the a priori SD of the parameters X.
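The a priori Gaussian density above can be evaluated numerically; in the following sketch the mean X̄ and covariance D are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative a priori mean and covariance (assumed values, N = 2).
X_bar = np.array([1.0, 2.0])
D = np.array([[0.5, 0.1],
              [0.1, 0.3]])

def rho(X):
    """A priori density rho(X) = (2 pi)^(-N/2) |D|^(-1/2)
    * exp(-0.5 (X - X_bar)^T D^{-1} (X - X_bar))."""
    N = X_bar.size
    d = X - X_bar
    norm = (2 * np.pi) ** (N / 2) * np.sqrt(np.linalg.det(D))
    return np.exp(-0.5 * d @ np.linalg.solve(D, d)) / norm

print(rho(X_bar))  # the density is maximal at the a priori mean
```

Solving with D rather than explicitly inverting it is the usual numerically stable choice.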
Accounting for the above-obtained probability density of the observational uncertainties ρ(Y, X), and supposing the absence of correlation between the uncertainties of the observations and the desired parameters, the criterion of maximum likelihood is applied to their joint density ρ(Y, X)ρ(X). For convenience the difference X − X̄ is considered as an independent variable. The following can be inferred after manipulations analogous to the derivation of (4.43):

X = \bar{X} + (G^{+} S_Y G + D^{-1})^{-1} G^{+} S_Y (Y - G_0 - G\bar{X}) .    (4.46)
Solution (4.46) is known as the statistical regularization method (Westwater and Strand 1968; Rodgers 1976; Kozlov 2000). The regularization is achieved here by adding the inverse a priori covariance matrix D^{-1} to the matrix of the equation system. Indeed, it is easy to verify that solution (4.46) exists even in the worst case G^{+} S_Y G = 0. On the other hand, the larger the a priori SD of the parameters, the smaller the contribution of matrix D^{-1} to (4.46); in the limit D^{-1} = 0, solution (4.46) converts to the solution without regularization (4.43). Statistical regularization (4.46) is much more convenient than (4.45) because it requires no iterative selection of the parameter h (though it does require a priori information),
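A minimal numerical sketch of formula (4.46) follows (not from the book; G, G_0, S_Y, X̄ and D^{-1} are illustrative assumptions, and `.T` stands for the transpose written as superscript + in the text). It demonstrates the point above: the estimate exists even when G^{+} S_Y G is singular, because D^{-1} makes the system matrix invertible.

```python
import numpy as np

# Hypothetical linearized forward model Y = G0 + G X (all values assumed).
G = np.array([[1.0, 1.0],
              [1.0, 1.0]])       # worst case: G^T S_Y G is singular
G0 = np.zeros(2)
S_Y = np.eye(2)                  # inverse observational covariance (assumed)
X_bar = np.array([0.5, 1.5])     # a priori mean
D_inv = np.diag([5.0, 5.0])      # inverse a priori covariance D^{-1}

Y = G0 + G @ np.array([0.8, 1.4])  # synthetic observations from assumed truth

# Statistical regularization, Eq. (4.46):
# X = X_bar + (G^T S_Y G + D^{-1})^{-1} G^T S_Y (Y - G0 - G X_bar)
A = G.T @ S_Y @ G + D_inv
X_hat = X_bar + np.linalg.solve(A, G.T @ S_Y @ (Y - G0 - G @ X_bar))
print(X_hat)
```

Without the D^{-1} term the solve would fail here, since G^T S_Y G has rank 1; with it, the estimate is a finite correction of the a priori mean toward the observations.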