Digital Signal Processing Reference
Approximate solutions of the above problem can be obtained by using any sparse
coding algorithm such as OMP [109], [138] or BP [38].
• Dictionary update: This is where the MOD algorithm differs from the KSVD
algorithm. The MOD algorithm updates all the atoms simultaneously by solving
the following optimization problem
\[
\hat{B} = \arg\min_{B} \| X - B\Gamma \|_F^2 \tag{6.1}
\]
whose solution is given by \( B = X\Gamma^T (\Gamma\Gamma^T)^{-1} \).
Even though the MOD algorithm is very effective and usually converges in a
few iterations, it suffers from the high complexity of the matrix inversion step as
discussed in [2].
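As an illustration, the MOD update \( B = X\Gamma^T(\Gamma\Gamma^T)^{-1} \) can be sketched in a few lines of NumPy. This is a minimal sketch under my own naming, not the reference implementation; solving the normal equations with a linear solver avoids forming the inverse explicitly:

```python
import numpy as np

def mod_dictionary_update(X, Gamma):
    """MOD dictionary update: least-squares solution B = X Gamma^T (Gamma Gamma^T)^{-1}.

    X     : (n, N) data matrix, one training signal per column.
    Gamma : (K, N) sparse coefficient matrix from the sparse coding step.
    Returns the updated dictionary B of shape (n, K).
    """
    # Solve (Gamma Gamma^T) B^T = Gamma X^T instead of inverting explicitly;
    # this is the step whose cost the text identifies as MOD's bottleneck.
    return np.linalg.solve(Gamma @ Gamma.T, Gamma @ X.T).T
```

On synthetic data generated as \(X = B\Gamma\) with a full-rank \(\Gamma\), this update recovers the generating dictionary exactly.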
In the case of KSVD, the dictionary update is performed atom-by-atom in an
efficient way rather than using a matrix inversion. The term to be minimized in
equation (6.1) can be rewritten as
\[
\| X - B\Gamma \|_F^2
= \Big\| X - \sum_{j} b_j \gamma_j^T \Big\|_F^2
= \Big\| \Big( X - \sum_{j \neq j_0} b_j \gamma_j^T \Big) - b_{j_0} \gamma_{j_0}^T \Big\|_F^2 ,
\]
where \( \gamma_j^T \) represents the \( j \)th row of \( \Gamma \). Since we want to update
\( b_{j_0} \) and \( \gamma_{j_0}^T \), the first term in the above equation,
\[
E_{j_0} = X - \sum_{j \neq j_0} b_j \gamma_j^T ,
\]
can be precomputed. The optimal \( b_{j_0} \) and \( \gamma_{j_0}^T \) are found by an SVD decomposition.
In particular, while fixing the cardinalities of all representations, only a subset of
the columns of \( E_{j_0} \) is taken into consideration. This way of updating leads to a
substantial speedup in the convergence of the training algorithm compared to the
MOD method.
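A single K-SVD atom update along these lines might be sketched as follows. The function name and interface are mine; the key point is that only the columns whose representations already use atom \( j_0 \) enter the SVD, so all sparsity patterns are preserved:

```python
import numpy as np

def ksvd_atom_update(X, B, Gamma, j0):
    """Update atom b_{j0} and coefficient row gamma_{j0} via a rank-1 SVD fit.

    Restricting to signals that currently use atom j0 keeps the cardinalities
    of all representations fixed, as described in the text.
    """
    omega = np.nonzero(Gamma[j0, :])[0]   # signals whose representation uses atom j0
    if omega.size == 0:
        return B, Gamma                   # atom unused; nothing to update
    # Residual E_{j0} without atom j0's contribution, restricted to those columns.
    E = X[:, omega] - B @ Gamma[:, omega] + np.outer(B[:, j0], Gamma[j0, omega])
    # Best rank-1 approximation E ~ s[0] * u v^T gives the new atom and row.
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    B[:, j0] = U[:, 0]
    Gamma[j0, omega] = s[0] * Vt[0, :]
    return B, Gamma
```

Because the previous pair \( (b_{j_0}, \gamma_{j_0}^T) \) is itself a feasible rank-1 candidate, this update can never increase the overall approximation error.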
Both the MOD and the KSVD dictionary learning algorithms are described in detail
in Algorithm 8.
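The overall alternation between sparse coding and dictionary update described above can be sketched as a small training loop. The helpers `omp` and `train_mod` are my own names, and the OMP here is a bare-bones greedy variant rather than the implementations cited in [109], [138]; the tiny diagonal loading and the joint rescaling of atoms and coefficient rows are numerical-safety choices of this sketch, not part of MOD itself:

```python
import numpy as np

def omp(B, x, k):
    """Bare-bones orthogonal matching pursuit: greedy k-sparse code of x in B."""
    resid, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(B.T @ resid))))   # most correlated atom
        coefs, *_ = np.linalg.lstsq(B[:, support], x, rcond=None)
        resid = x - B[:, support] @ coefs                     # re-fit, update residual
    gamma = np.zeros(B.shape[1])
    gamma[support] = coefs
    return gamma

def train_mod(X, K, k, n_iter=10, seed=0):
    """Alternate OMP sparse coding with the MOD dictionary update."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((X.shape[0], K))
    B /= np.linalg.norm(B, axis=0)
    for _ in range(n_iter):
        Gamma = np.stack([omp(B, x, k) for x in X.T], axis=1)  # sparse coding step
        G = Gamma @ Gamma.T + 1e-8 * np.eye(K)                 # light diagonal loading
        B = np.linalg.solve(G, Gamma @ X.T).T                  # MOD: B = X G^T (G G^T)^-1
        norms = np.maximum(np.linalg.norm(B, axis=0), 1e-12)
        B /= norms                                             # unit-norm atoms ...
        Gamma *= norms[:, None]                                # ... compensated in Gamma
    return B, Gamma
```

Swapping the MOD update for a loop of per-atom SVD updates turns the same skeleton into K-SVD.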
6.2 Discriminative Dictionary Learning
Dictionaries can be trained for both reconstruction and discrimination applications.
In the late nineties, Etemad and Chellappa proposed a linear discriminant analysis
(LDA) based basis selection and feature extraction algorithm for classification using