constructed from the thresholded orthogonal wavelet coefficients of translated ver-
sions of the original signal. However, this simple solution is not always relevant as it
weights equally both low- and high-quality estimates. In fact, the question is deeper
than that: suppose that we are given a sequence of estimates, each from a sparse
representation explaining the signal differently. Is there an effective way to merge
these estimates that leads to a better estimate by taking advantage of each transform?
Answering this question is difficult, in general. The problem amounts to figuring out
a set of weights that are constructed in such a way as to optimize the performance
of the resulting estimator (in equation (8.7), all estimates are equiweighted with $1/K$). An interesting solution was proposed by Fadili et al. (2007), who deployed a
data-adaptive Bayesian framework to optimally combine the individual closed-form
minimum mean-square error (MMSE) estimators. The idea is to weight each estimate $x_k$ at each sample according to the significance that the atoms in $\Phi_k$ have in synthesizing $x_k$ at that sample. This procedure can, however, be time consuming in practice.
A rigorous statistical framework that studies such weights and the properties of the
resulting estimators is that of aggregation estimators. Aggregation estimators with
exponential weights and sparse representations have been studied (Dalalyan and
Tsybakov 2008, 2009; Juditsky et al. 2008, and references therein). Sparsity oracle
inequalities and other optimality properties of the estimators have been established.
However, practical algorithms for large-scale data are still lacking. Recently, Elad
and Yavneh (2009) started working in this direction and proposed a randomized
orthogonal matching pursuit (OMP) algorithm to obtain a collection of competing
representations, which are then averaged to improve denoising performance.
This approach still needs some work to be applicable to large data and dictionaries.
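To make the baseline equiweighted combination of equation (8.7) concrete, here is a minimal sketch that averages $K$ per-transform estimates, each obtained by hard thresholding in its own transform domain. The transform interface (`forward`/`inverse` methods), the threshold rule `tau * sigma`, and the helper names are illustrative assumptions, not part of any particular toolbox.

```python
import numpy as np

def hard_threshold(coeffs, thresh):
    """Zero out coefficients whose magnitude falls below the threshold."""
    return coeffs * (np.abs(coeffs) >= thresh)

def combined_estimate(y, transforms, sigma, tau=3.0, weights=None):
    """Combine K per-transform denoised estimates of the noisy signal y.

    transforms : list of objects with forward(y) and inverse(coeffs) methods
                 (hypothetical interface standing in for T_k and its inverse).
    sigma      : noise standard deviation; the threshold is tau * sigma.
    weights    : optional per-estimate weights; defaults to the equiweighted
                 1/K combination of equation (8.7).
    """
    K = len(transforms)
    if weights is None:
        weights = np.full(K, 1.0 / K)        # naive equal weighting
    estimates = []
    for T in transforms:
        coeffs = T.forward(y)                 # analysis coefficients T_k y
        coeffs = hard_threshold(coeffs, tau * sigma)
        estimates.append(T.inverse(coeffs))   # per-transform estimate x_k
    # weighted average of the individual estimates
    return sum(w * x_k for w, x_k in zip(weights, estimates))
```

Replacing the uniform weights with data-adaptive ones, as in the schemes discussed above, only changes the `weights` argument; the structure of the combination is the same.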
In the next section, we describe a fast combined denoising algorithm that can
handle real-world images and large-scale dictionaries.
8.3.2 Combined Denoising Algorithm
The starting point of this approach is the sequence of multiresolution supports $(\mathcal{M}_k)_{1 \le k \le K}$ (see Section 6.2.3.1) that are obtained from the significant coefficients in $T_k y$ after hard thresholding, as in equation (8.6). Now, we seek a solution $x$ that has the sparsest representation simultaneously in each subdictionary $\Phi_k$ such that its coefficients $T_k x$ approach the empirical coefficients $T_k y$, but only at the locations retained in $\mathcal{M}_k$. Put formally, our goal is to solve the following optimization problem:
$$
\min_{x \in \mathcal{C}} \ \bigl[\, \|T_1 x\|_1 , \ldots, \|T_K x\|_1 \,\bigr],
\tag{8.8}
$$

where $\mathcal{C}$ is the set of linear constraints

$$
\mathcal{C} = \bigcap_{k=1}^{K} \mathcal{C}_k , \qquad
\mathcal{C}_k = \Bigl\{ x \in [0, +\infty)^N \subset \mathbb{R}^N \ :\ \bigl|\bigl(T_k (y - x)\bigr)[i]\bigr| \le e_k[i], \ \forall\, i \in \mathcal{M}_k \Bigr\}.
\tag{8.9}
$$
The positivity constraint can be replaced with other linear constraints on the
dynamic range of the solution. In practice, the constraint size $e_k[i]$ is typically set to $e_k[i] = \sigma_k[i]/2$, where $\sigma_k[i]$ is the standard deviation of the noise in the $k$th transform domain at the coefficient index $i$. See Section 6.2.1.1 on how to compute these values from $\sigma_\varepsilon$. In brief, these constraints guarantee that the solution will preserve any pattern that is detected as significant by any of the $K$ transforms.
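To illustrate how the constraint set $\mathcal{C}$ of equations (8.8)-(8.9) can be assembled, the sketch below builds each multiresolution support $\mathcal{M}_k$ by hard thresholding $T_k y$, sets $e_k[i] = \sigma_k[i]/2$, and tests whether a candidate solution $x$ satisfies all $K$ sets of constraints. The transform interface and the per-coefficient noise levels `sigma_k` are assumed for the purpose of the example; the actual minimization in equation (8.8) requires an iterative solver and is not shown.

```python
import numpy as np

def multiresolution_support(coeffs, sigma_k, tau=3.0):
    """Boolean mask of significant coefficients: |T_k y| >= tau * sigma_k."""
    return np.abs(coeffs) >= tau * sigma_k

def build_constraints(y, transforms, sigmas, tau=3.0):
    """Return, for each transform, the pair (support M_k, bound e_k) of eq. (8.9).

    transforms : list of objects with a forward(signal) method (assumed interface).
    sigmas     : per-transform arrays of coefficient noise standard deviations sigma_k[i].
    """
    constraints = []
    for T, sigma_k in zip(transforms, sigmas):
        coeffs_y = T.forward(y)                  # empirical coefficients T_k y
        M_k = multiresolution_support(coeffs_y, sigma_k, tau)
        e_k = sigma_k / 2.0                      # constraint size e_k[i] = sigma_k[i] / 2
        constraints.append((M_k, e_k))
    return constraints

def in_constraint_set(x, y, transforms, constraints):
    """Check x >= 0 and |(T_k(y - x))[i]| <= e_k[i] for every i in M_k and every k."""
    if np.any(x < 0):                            # positivity constraint
        return False
    for T, (M_k, e_k) in zip(transforms, constraints):
        residual = T.forward(y - x)              # T_k(y - x)
        if np.any(np.abs(residual[M_k]) > e_k[M_k]):
            return False
    return True
```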