Table 4.1 Examples of deconvolution algorithms from the microscopy literature, classified by
the type of noise model and methodology

Closed-form solutions
- No noise: Nearest neighbors [1], No neighbors [50], Inverse filter [29]
- Gaussian noise: Tikhonov [79], Regularized linear least squares [60], Wiener filter [80]

Iterative solutions
- No noise: Jansson van Cittert (JVC) [1]
- Gaussian noise: Nonlinear least squares (NLS) [15]
- Poisson noise: Maximum likelihood (ML) [36], Maximum a posteriori (MAP) [23, 31, 82, 83]
where the gradient is null. Accordingly, the estimate of the function o is given as
$\hat{o}(x) = (h \ast h)^{-1}(x) \ast (h \ast i)(x)$. This estimation method and other inversion
techniques, such as the inverse/pseudo-inverse filters, are fast because they have a
closed-form solution for the estimate $\hat{o}(x)$. However, they intrinsically assume that
the noise is AWGN and are thus valid only for images with large photon counts. Since
an exact inverse of the PSF does not exist for CLSM, and the OTF has only a finite region
of support, these algorithms have difficulty restoring information beyond a certain
cut-off frequency and cannot be used to extend the reconstruction to non-measured
frequencies. On the other hand, gradient-based iterative algorithms can produce
negative intensities during successive iterations even if the initial estimate is
positive. For such algorithms, we have also noticed a relative lowering of contrast
in the estimates, in addition to noise amplification.
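To make the closed-form inversion concrete, here is a minimal sketch of a Fourier-domain regularized inverse (Tikhonov-style) filter, assuming a 2-D NumPy image `i_img` and a centered PSF `h` of the same shape; the regularization weight `lam` is an illustrative parameter, not a value from the text, and as `lam` goes to zero the expression approaches the unregularized inverse-filter solution at frequencies where the OTF is non-zero.

```python
# Minimal sketch (assumptions noted above): closed-form Tikhonov-regularized
# inversion in the Fourier domain. The estimate is O = conj(H) I / (|H|^2 + lam),
# a regularized counterpart of (h*h)^{-1} * (h*i).
import numpy as np

def tikhonov_deconvolve(i_img, h, lam=1e-2):
    H = np.fft.fft2(np.fft.ifftshift(h))   # OTF of the centered PSF
    I = np.fft.fft2(i_img)                  # spectrum of the observation
    O = np.conj(H) * I / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(O))
```

Because $|H|^2$ vanishes beyond the OTF cut-off, such an estimate carries no information at non-measured frequencies, which illustrates the limitation mentioned above.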
4.2.2.3 Multiplicative Richardson-Lucy Algorithm
The principal idea behind the multiplicative Richardson-Lucy (RL) algorithm [44, 62]
is to maximize the likelihood $\Pr(i|o)$, given that the statistics follow a Poisson
distribution. As the intensities of the individual voxels $i(x)$ are conditionally
independent, the overall likelihood is the product of the individual likelihoods at
each voxel. That is,
$$\Pr(i|o) = \prod_{x \in \Omega_s} \frac{\big((h \ast o)(x) + b(x)\big)^{i(x)} \exp\!\big(-\big((h \ast o)(x) + b(x)\big)\big)}{i(x)!}. \qquad (4.15)$$
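For intuition, the following is a minimal sketch of evaluating (4.15), assuming the blurred estimate, background, and observation are NumPy arrays (the names `h_conv_o`, `b`, and `i_img` are illustrative, not from the text); in practice the logarithm of the product is computed as a sum of per-voxel log-probabilities, which is exactly the step taken next.

```python
# Minimal sketch (assumptions noted above): the Poisson likelihood of Eq. (4.15),
# evaluated as a sum of per-voxel log-PMFs thanks to conditional independence.
import numpy as np
from scipy.stats import poisson

def log_likelihood(i_img, h_conv_o, b):
    mean = h_conv_o + b                      # Poisson mean (h*o)(x) + b(x)
    return np.sum(poisson.logpmf(i_img, mean))
```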
The mean of the above Poisson process is $(h \ast o)(x) + b(x)$. Using the idea of the
negative logarithm, as in the previous case, the data energy function to be
minimized is
$$J_{\mathrm{obs}}(o(x)) = \sum_{x \in \Omega_s} \Big[ \big((h \ast o)(x) + b(x)\big) - i(x)\,\log\big((h \ast o)(x) + b(x)\big) \Big]. \qquad (4.16)$$
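As a concrete illustration, here is a minimal sketch of the data term (4.16) and one multiplicative RL update for the mean $(h \ast o)(x) + b(x)$, assuming 2-D NumPy arrays and circular (FFT) convolution; the division guard `eps` is an implementation choice, and the update rule shown is the standard RL iteration for this Poisson model, which this excerpt does not write out.

```python
# Minimal sketch (assumptions noted above): Poisson data term J_obs of Eq. (4.16)
# and one multiplicative Richardson-Lucy update for the mean (h*o)(x) + b(x).
import numpy as np

def _otf(h):
    # OTF of a centered PSF (same shape as the image is assumed).
    return np.fft.fft2(np.fft.ifftshift(h))

def _conv2(a, H):
    # Circular convolution of `a` with the PSF whose OTF is `H`.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * H))

def j_obs(o, h, i_img, b):
    # J_obs(o) = sum_x [ (h*o)(x) + b(x) - i(x) log((h*o)(x) + b(x)) ]
    mean = _conv2(o, _otf(h)) + b
    return float(np.sum(mean - i_img * np.log(mean)))

def rl_update(o, h, i_img, b, eps=1e-12):
    # o_{k+1}(x) = o_k(x) * [ h(-x) * ( i / ((h*o_k) + b) ) ](x)
    # Correlation with h (the flipped PSF) is multiplication by conj(OTF).
    H = _otf(h)
    mean = _conv2(o, H) + b
    ratio = i_img / (mean + eps)
    return o * _conv2(ratio, np.conj(H))
```

Because the update is multiplicative, a positive initial estimate stays non-negative from one iteration to the next, in contrast with the gradient-based schemes mentioned earlier.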