functions in the dictionaries. Greedy algorithms estimate sparse representations in two steps: the first step selects the basis functions with nonzero representation coefficients, and the second step estimates these nonzero coefficients. However, it is in general not possible to select all of the required basis functions at once, so greedy algorithms perform the selection in an iterative manner.
Let Γ be the set of basis functions corresponding to nonzero representation coefficients. It is initialized as an empty set, Γ_0 = ∅. In the k-th iteration, a single basis function from the complement of Γ_{k−1} is added to form Γ_k, so the set grows by one element per iteration. The algorithms that use greedy strategies differ only in how they select the new basis function used to update Γ_k.
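Before turning to specific algorithms, it may help to see this shared structure as code. The following is a rough Python sketch of such a greedy loop in which the selection rule is a pluggable function; the names greedy_sparse_approximation and select_atom, the threshold xi, and the least-squares coefficient estimate are illustrative assumptions rather than details given in the text (individual algorithms, such as matching pursuit below, may estimate the coefficients differently).

```python
import numpy as np

def greedy_sparse_approximation(y, D, select_atom, xi=1e-6, max_iter=50):
    """Generic greedy loop: grow the support by one atom per iteration, then
    estimate the nonzero coefficients on that support."""
    support = []                                   # Gamma_k: selected atom indices
    coeffs = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(max_iter):
        if np.linalg.norm(residual) <= xi:
            break
        i_new = select_atom(residual, D, support)  # algorithm-specific selection rule
        support.append(i_new)
        # one possible coefficient estimate: least squares on the current support
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = y - D[:, support] @ sol
    return coeffs, support
```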
(A) Matching pursuit
The matching pursuit (MP) algorithm proposed in [16] selects the basis functions sequentially by computing inner products between the signal and the basis functions. In each iteration, matching pursuit calculates a new signal approximation. The approximation error is then used in the next iteration to determine which new basis function is to be selected. Let y be the input signal vector, D = [d_1, d_2, ..., d_M] be the normalized dictionary matrix, and d_i be a basis vector of D. Then, the MP algorithm can be summarized as:

Initialize: ŷ_0 = 0, r_0 = y, Γ_0 = ∅, ω_0 = [ ], k = 1. If ‖r_{k−1}‖_2 > ξ, repeat:
- compute α_i = ⟨r_{k−1}, d_i⟩,
- find i_max = arg max_i |α_i|,
- update:
  1. Γ_k = [Γ_{k−1}, d_{i_max}];
  2. ω_k = [ω_{k−1}, α_{i_max}]^T;
  3. ŷ_k = ŷ_{k−1} + α_{i_max} d_{i_max};
  4. r_k = r_{k−1} − α_{i_max} d_{i_max};
  5. k = k + 1.
The proof of convergence of the MP algorithm relies essentially on the fact that ⟨r_k, d_{i_max}⟩ = 0, i.e., each new residual is orthogonal to the atom just selected. The MP algorithm does not require computing any matrix inverse, so it is very simple to implement. However, it also has the shortcoming that, although asymptotic convergence is guaranteed, the approximation obtained after any finite number of iterations is in general suboptimal, and the approximation error may still be quite large, especially for a nonorthogonal dictionary.
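For concreteness, here is a minimal NumPy sketch of the MP iteration above; the function name matching_pursuit and the parameters xi (the threshold ξ) and max_iter are illustrative choices, not part of the original pseudo-code. Coefficients of atoms selected more than once are accumulated, which produces the same approximation ŷ_k as appending each α_{i_max} to ω_k.

```python
import numpy as np

def matching_pursuit(y, D, xi=1e-6, max_iter=100):
    """Matching pursuit sketch: y is the signal, D a column-normalized dictionary."""
    residual = y.astype(float).copy()           # r_0 = y
    coeffs = np.zeros(D.shape[1])               # accumulated coefficients (omega)
    support = []                                # indices of selected atoms (Gamma)
    for _ in range(max_iter):
        if np.linalg.norm(residual) <= xi:      # stop once ||r||_2 falls below xi
            break
        alpha = D.T @ residual                  # alpha_i = <r_{k-1}, d_i>
        i_max = int(np.argmax(np.abs(alpha)))   # atom most correlated with the residual
        coeffs[i_max] += alpha[i_max]           # update coefficient of the selected atom
        residual -= alpha[i_max] * D[:, i_max]  # r_k = r_{k-1} - alpha_{i_max} d_{i_max}
        if i_max not in support:
            support.append(i_max)
    return coeffs, support, residual
```

Each iteration costs essentially one product D^T r plus a rank-one residual update, which is why no matrix inverse is needed.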
(B) Orthogonal least squares
Another commonly used greedy algorithm is the orthogonal least squares (OLS) algorithm [3]. In each iteration, OLS selects the new basis function that leads to the minimum residual error after orthogonalization. The selection procedure of OLS can be written as:
Initialize: r_0 = y, Γ_0 = [ ], k = 1. If ‖r_{k−1}‖_2 > ξ, repeat:
- find i_max = arg min_i ‖y − Γ_i (Γ_i^T Γ_i)^{−1} Γ_i^T y‖_2^2, where Γ_i = Γ_{k−1} ∪ {d_i}, for all d_i ∉ Γ_{k−1},
- update:
  1. Γ_k = [Γ_{k−1}, d_{i_max}];
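A direct, unoptimized Python sketch of this selection rule is given below; the helper names ols_select and orthogonal_least_squares, the threshold xi, and the use of numpy.linalg.lstsq for the orthogonal projection onto span(Γ_i) are assumptions made for illustration. Each candidate atom is tried by appending it to the current support, projecting y onto the enlarged subspace, and keeping the atom that yields the smallest residual energy.

```python
import numpy as np

def ols_select(y, D, support):
    """One OLS selection step: among unused atoms, pick the one whose addition
    minimizes the residual after orthogonal (least-squares) projection of y."""
    best_i, best_err = -1, np.inf
    for i in range(D.shape[1]):
        if i in support:
            continue
        G = D[:, support + [i]]                       # Gamma_i = Gamma_{k-1} U {d_i}
        coef, *_ = np.linalg.lstsq(G, y, rcond=None)  # project y onto span(Gamma_i)
        err = np.linalg.norm(y - G @ coef) ** 2       # residual energy for this candidate
        if err < best_err:
            best_i, best_err = i, err
    return best_i

def orthogonal_least_squares(y, D, xi=1e-6, max_iter=50):
    """OLS sketch: grow the support greedily, then refit all coefficients."""
    support, coeffs = [], np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    while len(support) < max_iter and np.linalg.norm(residual) > xi:
        i_max = ols_select(y, D, support)
        if i_max < 0:                                 # no unused atoms remain
            break
        support.append(i_max)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = y - D[:, support] @ sol
    return coeffs, support
```

Each iteration of this brute-force version solves one least-squares problem per remaining atom; practical OLS implementations avoid this cost by orthogonalizing the selected atoms incrementally.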