Equation 8.6 can have different interpretations depending on the approach used to derive it and the meaning given to the regularization term $f(Q)$. All of the following methods provide the same result under specific conditions [26,27]: the Bayesian methodology that maximizes the posterior $p(Q \mid X)$ assuming a Gaussian prior on $Q$ [28]; the Wiener estimator with proper $C_\varepsilon$ and $C_S$; and Tikhonov regularization, which trades off the goodness of fit (Equation 8.5) against the regularization term $f(Q) = Q^T W_Q Q$ and thus attempts to find the solution with minimal $W_Q$-weighted second norm. All these frameworks lead to a solution of the following general form:

$$\hat{Q} = W_Q^{-1} G^T \left( G W_Q^{-1} G^T + \lambda W_X^{-1} \right)^{-1} X \qquad (8.7)$$
If and only if $W_Q$ and $W_X$ are positive definite [29], Equation 8.7 is equivalent to

$$\hat{Q} = \left( G^T W_X G + \lambda W_Q \right)^{-1} G^T W_X X \qquad (8.8)$$
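To make the equivalence concrete, here is a minimal numerical sketch (hypothetical NumPy code, not from the original text; $G$, $X$, $\lambda$, and the `spd` helper are random stand-ins) that evaluates both closed forms and confirms they agree when $W_Q$ and $W_X$ are positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 32                      # sensors, sources (underdetermined, as in EMSI)
G = rng.standard_normal((N, M))   # stand-in lead-field (gain) matrix
X = rng.standard_normal(N)        # stand-in measurement vector
lam = 0.1                         # regularization parameter (arbitrary here)

def spd(k):
    """Random symmetric positive definite matrix of size k x k."""
    A = rng.standard_normal((k, k))
    return A @ A.T + k * np.eye(k)

W_Q, W_X = spd(M), spd(N)

# Equation 8.7: Q = W_Q^{-1} G^T (G W_Q^{-1} G^T + lam * W_X^{-1})^{-1} X
W_Q_inv = np.linalg.inv(W_Q)
Q_87 = W_Q_inv @ G.T @ np.linalg.solve(
    G @ W_Q_inv @ G.T + lam * np.linalg.inv(W_X), X)

# Equation 8.8: Q = (G^T W_X G + lam * W_Q)^{-1} G^T W_X X
Q_88 = np.linalg.solve(G.T @ W_X @ G + lam * W_Q, G.T @ W_X @ X)

assert np.allclose(Q_87, Q_88)    # identical when W_Q, W_X are positive definite
```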
In the case when viable prior information about the source distribution is available, it is easy to account for it by minimizing the deviation of the solution not from $Q_p = 0$ (which constitutes the minimal second norm solution $\hat{Q} = G^+ X$) but from the prior $Q_p$, i.e., $f(Q) = (Q - Q_p)^T W_Q (Q - Q_p)$. Then Equation 8.6 will be minimized at

$$\hat{Q} = G^+ X + \left( I - G^+ G \right) Q_p = Q_p + G^+ \left( X - G Q_p \right) \qquad (8.9)$$
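Continuing the same hypothetical sketch, the prior-informed estimate of Equation 8.9 needs only the regularized inverse operator $G^+$ from Equation 8.8, and its two algebraic forms coincide:

```python
# Regularized inverse operator G+ taken from Equation 8.8
# (continues the previous sketch: reuses G, X, W_Q, W_X, lam, rng, M)
G_plus = np.linalg.solve(G.T @ W_X @ G + lam * W_Q, G.T @ W_X)

Q_p = rng.standard_normal(M)      # hypothetical prior source distribution

# Equation 8.9 in both of its algebraic forms
Q_a = G_plus @ X + (np.eye(M) - G_plus @ G) @ Q_p
Q_b = Q_p + G_plus @ (X - G @ Q_p)

assert np.allclose(Q_a, Q_b)      # the two forms coincide
```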
For the noiseless case with a weighted $L_2$-norm regularizer, the Moore-Penrose pseudoinverse $G^+ = W_Q^{-1} G^T \left( G W_Q^{-1} G^T \right)^{-1}$ gives the inverse by avoiding the null-space projections of $G$ in the solution, thus providing a unique solution with minimal second norm.

Taking $W_Q = I$, $W_X = I$, and $Q_p = 0$ constitutes the simplest regularized minimum norm solution (Tikhonov regularization). Classically, $\lambda$ is found using cross-validation [30] or L-curve [31] techniques, which decide how much of the noise power should be brought into the solution; a sketch of the latter follows. Phillips et al. [32] suggested the iterative ReML method, in which the conditional expectation of the source distribution and the regularization parameters are estimated jointly. Additional constraints can be applied for greater regularization, for instance, temporal smoothness [33].
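As an illustration of the L-curve technique mentioned above, the following sketch (again hypothetical, continuing the previous snippets) sweeps $\lambda$ for the simplest identity-weighted case and records the two norms whose trade-off the corner of the curve identifies:

```python
# L-curve sweep for the simplest case W_Q = W_X = I, Q_p = 0
# (continues the previous sketch: reuses G, X, N)
lambdas = np.logspace(-4, 2, 50)
residual_norms, solution_norms = [], []
for lam_k in lambdas:
    # Equation 8.7 with identity weights: Q = G^T (G G^T + lam I)^{-1} X
    Q_k = G.T @ np.linalg.solve(G @ G.T + lam_k * np.eye(N), X)
    residual_norms.append(np.linalg.norm(X - G @ Q_k))
    solution_norms.append(np.linalg.norm(Q_k))
# Plotting log residual norm against log solution norm traces the
# characteristic L shape; lambda at the corner balances data fit
# against solution energy (how much noise power enters the solution).
```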
As presented in Equation 8.8, $G^+$ can account for different features of the source or data space by incorporating them correspondingly into $W_Q$ and $W_X$.
The following data-driven features are commonly used in EMSI (a combined sketch follows the list):

- $W_X = C_\varepsilon^{-1}$ accounts for any possible noise covariance structure or, if $C_\varepsilon$ is diagonal, scales the error terms according to the noise level of each sensor.
- $W_Q = C_S^{-1}$ accounts for prior knowledge of the source's covariance structure.
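A closing sketch (hypothetical, continuing the snippets above) shows how such covariance estimates could be plugged in as weights before applying Equation 8.8 unchanged; `C_eps` here is estimated from simulated per-sensor noise, and `C_S` is a random stand-in for a real source prior:

```python
# Data-driven weights: whiten by the noise covariance, encode a source prior
# (continues the previous sketch: reuses rng, G, X, lam, spd, N, M)
noise = rng.standard_normal((200, N)) * rng.uniform(0.5, 2.0, N)
C_eps = np.cov(noise, rowvar=False)   # estimated noise covariance (N x N)
C_S = spd(M)                          # stand-in for a prior source covariance

W_X = np.linalg.inv(C_eps)            # scales errors by per-sensor noise level
W_Q = np.linalg.inv(C_S)              # encodes the prior source covariance

# Equation 8.8 applied unchanged with the data-driven weights
Q_hat = np.linalg.solve(G.T @ W_X @ G + lam * W_Q, G.T @ W_X @ X)
```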
Custom Search