4.2.3 Implicit Denoising by Regularization
To address the problem of noise amplification (Sect. 4.2.2.1 ), most deconvolution
approaches adopt a strategy to reduce the noise. For example, in the iterative
algorithms mentioned in the previous section, as the number of iterations increases,
ideally, the images should appear sharper until the final solution is reached.
However, in practical situations, the algorithm is terminated before divergence of
the solution or amplification of the noise. There is thus a compromise to be made
between the desired sharpness of the image (or high frequencies to be restored)
and the amount of noise amplified. This happens because the algorithm is unable to
find a stable noise-free solution. This can be overcome by introducing a smoothness
constraint into the problem, as a priori knowledge, which causes the algorithm to stabilize.
Another approach is to adopt the Tikhonov methodology. In the 1960s, Tikhonov
laid down the theoretical basis of modern inversion methods by introducing the
concept of regularized solutions. That is, we only search for a solution near the
observations, i.e., one that keeps the data-fidelity term
$\|i(x) - h(x) \ast o(x)\|_2^2$ small, and within this set of solutions
we search for a smooth solution, for example by jointly minimizing
$\|\nabla o(x)\|_2^2$.
Tikhonov formalizes this trade-off between fidelity to the data and regularity by
defining regularized solutions as those that minimize a joint criterion. He showed
that the problem becomes well-posed if it is reformulated using this joint criterion.
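For this quadratic smoothness penalty, the joint criterion $\min_o \|i - h \ast o\|_2^2 + \lambda \|\nabla o\|_2^2$ admits a closed-form minimizer in the Fourier domain. The following is a minimal NumPy sketch, not taken from the text: it assumes periodic boundary conditions and a PSF sampled on the image grid with its center at the array center, and the function name and the weight lam are illustrative.

```python
import numpy as np

def tikhonov_deconvolve(i_img, psf, lam=1e-2):
    """Closed-form Tikhonov-regularized deconvolution (periodic boundaries).

    Minimizes ||i - h*o||^2 + lam * ||grad(o)||^2, where convolution and
    differentiation become pointwise multiplications in the Fourier domain.
    """
    # ifftshift moves the PSF center to the array origin (assumes a centered PSF)
    H = np.fft.fft2(np.fft.ifftshift(psf), s=i_img.shape)
    I = np.fft.fft2(i_img)
    # squared frequency response of the forward-difference gradient:
    # |1 - exp(-2*pi*j*f)|^2 = 4 sin^2(pi f), summed over both axes
    fy = np.fft.fftfreq(i_img.shape[0])[:, None]
    fx = np.fft.fftfreq(i_img.shape[1])[None, :]
    D2 = (2 * np.sin(np.pi * fy))**2 + (2 * np.sin(np.pi * fx))**2
    # normal equations: (|H|^2 + lam * |D|^2) O = conj(H) I
    O = np.conj(H) * I / (np.abs(H)**2 + lam * D2)
    return np.real(np.fft.ifft2(O))
```

Small values of lam favor fidelity to the data (a sharper but noisier estimate), while large values favor smoothness, which is precisely the compromise described above.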
The encoding of uncertain or partial prior information can be envisaged within the
variational framework (see for example [ 3 ] for a review), or within the following
Bayesian probabilistic framework as we do below. Accordingly, the posterior
probability is
\[
\Pr(o \mid i) = \frac{\Pr(i \mid o)\,\Pr(o)}{\Pr(i)},
\tag{4.19}
\]
where Pr(o) is a probability density function (the prior) from which o is assumed to be drawn.
By using the Bayesian formula in Eq. (4.19), a rigorous statistical interpretation
of regularization immediately follows: o is obtained as the maximum a
posteriori (MAP) estimate, or equivalently by minimizing the negative logarithm of the
posterior, as
\[
o(x) = \arg\max_{o \geq 0} \Pr(o \mid i) = \arg\min_{o \geq 0} \bigl( -\log \Pr(o \mid i) \bigr).
\tag{4.20}
\]
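Expanding the negative logarithm of the posterior with Eq. (4.19) makes the structure of this criterion explicit:
\[
-\log \Pr(o \mid i) = -\log \Pr(i \mid o) - \log \Pr(o) + \log \Pr(i).
\]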
As Pr(i) does not depend on o or h, it can be considered a normalizing constant,
and it will hereafter be excluded from the estimation procedure. The minimization of
the negative logarithm of Pr(o|i) in Eq. (4.20) can then be rewritten as the minimization
of the following joint energy functional:
\[
J(o(x)) = J_{\mathrm{obs}}(o(x)) + J_{\mathrm{reg}}(o(x)),
\tag{4.21}
\]
where $J_{\mathrm{obs}}(o) = -\log \Pr(i \mid o)$ is the data-fidelity term and $J_{\mathrm{reg}}(o) = -\log \Pr(o)$ is the regularization term.
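As an illustration only, not an algorithm from the text, the sketch below minimizes one concrete instance of Eq. (4.21), with a Gaussian-noise data term $J_{\mathrm{obs}}(o) = \|i - h \ast o\|_2^2$ and a smoothness prior $J_{\mathrm{reg}}(o) = \lambda \|\nabla o\|_2^2$, using projected gradient descent so that the constraint $o \geq 0$ from Eq. (4.20) is respected at every iteration; the step size and iteration count are arbitrary choices.

```python
import numpy as np

def map_deconvolve(i_img, psf, lam=1e-2, step=0.2, n_iter=200):
    """Projected gradient descent on J(o) = ||i - h*o||^2 + lam*||grad(o)||^2.

    Each iteration takes a gradient step on the joint functional and then
    projects onto the constraint set {o >= 0} of Eq. (4.20).
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=i_img.shape)

    def conv(x, K):
        # periodic convolution via the FFT
        return np.real(np.fft.ifft2(np.fft.fft2(x) * K))

    def laplacian(x):
        # 5-point periodic discrete Laplacian (equals -grad^T grad)
        return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

    o = i_img.astype(float)                   # start from the observed image
    for _ in range(n_iter):
        residual = conv(o, H) - i_img         # h*o - i
        # gradient of J: 2 h~*(h*o - i) - 2 lam Laplacian(o), where h~ is
        # the adjoint (flipped) PSF, i.e. conj(H) in the Fourier domain
        grad = 2.0 * conv(residual, np.conj(H)) - 2.0 * lam * laplacian(o)
        o = np.maximum(o - step * grad, 0.0)  # project onto {o >= 0}
    return o
```

Each iteration trades off the two terms of Eq. (4.21): the data-fidelity gradient pulls the estimate toward the observations, while the Laplacian term smooths it, with lam controlling the balance.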