Digital Signal Processing Reference
the wavelet and curvelet transforms. This necessitates, first, the estimation of the
standard deviations of the noise in the different subbands of both transforms; see
the case of colored Gaussian noise in Section 6.2.1.1. As h is generally low pass,
the traditional wavelet basis does not diagonalize the covariance of the noise after
inverting H. To address this, Khalifa et al. (2003) introduced mirror wavelets as particular wavelet packets that subdecompose the finest wavelet scale. They are recommended as they may produce better results in practice.
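As a concrete illustration of the subband noise estimation step, the following NumPy sketch applies the classical MAD estimator to the detail subbands of a one-level Haar transform. The Haar implementation and the sigma = 2 white-noise test are illustrative choices, not the book's code.

```python
import numpy as np

def haar2d_level(x):
    """One level of an orthonormal 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # row averages
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def mad_sigma(coeffs):
    """Robust noise std estimate for a subband: median absolute deviation / 0.6745."""
    return np.median(np.abs(coeffs)) / 0.6745

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 2.0, size=(256, 256))   # white Gaussian noise, sigma = 2
_, lh, hl, hh = haar2d_level(noise)
sigmas = {name: mad_sigma(b) for name, b in [("LH", lh), ("HL", hl), ("HH", hh)]}
# for white noise every subband estimate should be close to the true sigma
```

For colored noise (as after inverting H), the estimates would differ across subbands, which is precisely why a per-subband sigma is needed.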
An iterative deconvolution algorithm is more general and can be applied without stringent restrictions on h. For the sake of simplicity, we assume that the noise ε ∼ N(0, σ_ε²). Again, this approach begins by computing K multiresolution supports (M_k)_{1 ≤ k ≤ K} obtained from the significant coefficients in each T_k y. For other noise
models, for example, Poisson (see Section 6.4), the multiresolution supports are
computed on the stabilized observations using the appropriate variance-stabilizing
transform (Anscombe for Poisson noise). Let M_k be the binary (nonsquare) matrix that extracts the coefficients indexed in M_k from any coefficient vector.
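In practice the extraction matrix M_k need never be formed explicitly; a boolean mask over the coefficient vector does the same job. A minimal NumPy sketch, with a hypothetical 4σ detection threshold and synthetic coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
coeffs = rng.normal(0.0, sigma, size=100)   # stand-in for T_k y, one transform's coefficients
coeffs[[3, 40, 77]] += 10.0                 # plant a few clearly significant coefficients

# Multiresolution support M_k: indices of coefficients above a detection threshold
# (4-sigma here; the value is illustrative).
support = np.abs(coeffs) > 4.0 * sigma

# Applying the binary extraction matrix M_k is just boolean indexing with the mask;
# the matrix itself (|M_k| x N, hence nonsquare) never needs to be built.
extracted = coeffs[support]
assert extracted.shape[0] == support.sum()
```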
With the multiresolution supports at hand, the combined deconvolution approach seeks a solution to the following optimization problem (Starck et al. 2003c):
min_{x ∈ C} ||x||_{TV_p}^p ,   (8.13)

where C = [0, +∞)^N ∩ ( ⋂_{k=1}^{K} { x ∈ R^N : ||M_k T_k (y − H x)||_2 ≤ σ } ),

and ||x||_{TV_p} is the ℓ_p norm of the discrete gradient of x. The discrete total variation seminorm (6.21) obviously corresponds to p = 1. Here we took p = 1.1 to approach the case p = 1 while dealing with a strictly convex and differentiable functional. The regularization in equation (8.13) promotes piecewise smooth candidate solutions. The constraint imposes fidelity to the data or, more exactly, to the significant coefficients of the data, obtained by the different transforms. Nonsignificant (i.e., due to noise) coefficients are excluded, hence avoiding noise amplification in the solution.
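To make the TV_p regularizer concrete, here is a small NumPy sketch of ||x||_{TV_p}^p; forward differences with replicate boundary are an illustrative discretization, not necessarily the one used in the book:

```python
import numpy as np

def tv_p(x, p=1.1):
    """||x||_{TV_p}^p: sum over pixels of the gradient magnitude raised to p.
    p slightly above 1 keeps the functional strictly convex and differentiable,
    as in the text."""
    gx = np.diff(x, axis=0, append=x[-1:, :])   # vertical forward difference
    gy = np.diff(x, axis=1, append=x[:, -1:])   # horizontal forward difference
    mag = np.sqrt(gx**2 + gy**2)
    return np.sum(mag**p)

flat = np.ones((8, 8))          # constant image: zero gradient everywhere
step = np.zeros((8, 8))
step[:, 4:] = 1.0               # piecewise-constant step edge of height 1
assert tv_p(flat) == 0.0        # flat regions cost nothing
assert tv_p(step) > 0.0         # the edge is penalized by its length
```

For a unit-height step, each of the 8 edge pixels contributes 1**p = 1, so the penalty equals the edge length, matching the p → 1 total variation intuition.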
8.4.2 Combined Deconvolution Algorithm
To solve (8.13), and if one can afford to subiterate to compute the projector on the
constraint, the splitting framework of Chapter 7 can be used advantageously. Otherwise, one can think of using the HSD algorithm as for denoising. However, because
of the convolution operator H , it turns out that the computation of a nonexpansive
operator associated with the constraint on the multiresolution support having the
proper fixed point set is difficult without subiterating.
We then turn to another alternative by relaxing the multiresolution constraints
in equation (8.13) into an augmented Lagrangian form:
min_{x ∈ [0, +∞)^N} (1/2) Σ_{k=1}^{K} ||M_k T_k (y − H x)||_2^2 + λ ||x||_{TV_p}^p ,   (8.14)

where λ is a regularization parameter. Let F(x) = ||x||_{TV_p}^p. Its gradient with respect to x is

∇F(x) = ((div ∘ G ∘ ∇)(x))_{1 ≤ i ≤ N} ,   (8.15)
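Equation (8.15) can be checked numerically. The NumPy sketch below assumes forward-difference gradients with replicate boundaries and an ε-smoothed magnitude (both illustrative choices, not the book's discretization); the pointwise weighting w plays the role of G, and grad2d_adjoint acts as the discrete (negative) divergence, so signs may differ from the book's convention for div:

```python
import numpy as np

def grad2d(x):
    """Forward differences with replicate boundary (last difference is zero)."""
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    return gx, gy

def grad2d_adjoint(vx, vy):
    """Adjoint of grad2d, i.e. a discrete negative divergence, derived entrywise."""
    ax = np.zeros_like(vx)
    ax[0, :] = -vx[0, :]
    ax[1:-1, :] = vx[:-2, :] - vx[1:-1, :]
    ax[-1, :] = vx[-2, :]
    ay = np.zeros_like(vy)
    ay[:, 0] = -vy[:, 0]
    ay[:, 1:-1] = vy[:, :-2] - vy[:, 1:-1]
    ay[:, -1] = vy[:, -2]
    return ax + ay

def tv_p_grad(x, p=1.1, eps=1e-8):
    """Gradient of the eps-smoothed TV_p functional sum((|grad x|^2 + eps)^(p/2));
    the weighting w = p * (|grad x|^2 + eps)^(p/2 - 1) stands in for G in (8.15)."""
    gx, gy = grad2d(x)
    w = p * (gx**2 + gy**2 + eps) ** (p / 2 - 1)
    return grad2d_adjoint(w * gx, w * gy)

# sanity check of the adjoint identity <grad x, v> == <x, adjoint(v)>
rng = np.random.default_rng(3)
x, vx, vy = (rng.normal(size=(6, 6)) for _ in range(3))
gx, gy = grad2d(x)
assert abs(np.sum(gx * vx + gy * vy) - np.sum(x * grad2d_adjoint(vx, vy))) < 1e-10
```

With this gradient in hand, one step of a (hypothetical) projected gradient scheme for (8.14) would subtract a step size times the data-fidelity and TV_p gradients and clip the result to [0, +∞)^N.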