Fig. 6.2 Information-theoretic dictionary update with global atoms shared over classes. For a better visual representation, sparsity 2 is chosen and a randomly selected subset of all samples is shown. The recognition rates associated with (a), (b), and (c) are 30.63%, 42.34%, and 51.35%; the recognition rates associated with (d), (e), and (f) are 73.54%, 84.45%, and 87.75%. Note that the ITDU effectively enhances the discriminability of the set of common atoms [114].
To illustrate how the discriminability of dictionary atoms selected by the information-theoretic dictionary selection (ITDS) method can be further enhanced using the information-theoretic dictionary update (ITDU) method, consider Fig. 6.2.
The Extended YaleB face dataset [64] and the USPS handwritten digits dataset [1] are used for illustration. Sparsity 2 is adopted for visualization, as the non-zero sparse coefficients of each image can then be plotted as a 2-D point. In Fig. 6.2, with a common set of atoms shared over all classes, the sparse coefficients of all samples become points in the same 2-D coordinate space. Different classes are represented by different colors. The original images are also shown, placed at the coordinates defined by their non-zero sparse coefficients. The atoms to be updated in Fig. 6.2(a) and 6.2(d) are selected using ITDS. One can see from Fig. 6.2 that the ITDU method makes the sparse coefficients of different classes more discriminative, leading to significantly improved classification accuracy [114].
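To make the visualization concrete, the following is a minimal Python sketch, assuming synthetic two-class data and randomly drawn shared atoms in place of the face/digit images and the ITDS-selected atoms (the ITDS/ITDU algorithms themselves are not implemented here); scikit-learn's orthogonal matching pursuit stands in for the sparse coding step.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
dim, n_per_class = 20, 100

# Two synthetic classes with different mean directions (stand-ins for the
# face/digit images of Fig. 6.2).
mu0, mu1 = rng.normal(size=dim), rng.normal(size=dim)
Y = np.vstack([mu0 + 0.5 * rng.normal(size=(n_per_class, dim)),
               mu1 + 0.5 * rng.normal(size=(n_per_class, dim))])
labels = np.repeat([0, 1], n_per_class)

# Two unit-norm atoms shared by both classes (random here, standing in for
# atoms chosen by dictionary selection).
D = rng.normal(size=(dim, 2))
D /= np.linalg.norm(D, axis=0)

# Sparse-code every sample with sparsity 2; with exactly two shared atoms,
# each sample receives one coefficient per atom, i.e. a point in the same
# 2-D coordinate space for all classes.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
omp.fit(D, Y.T)                 # design matrix: the dictionary atoms
coeffs = omp.coef_              # shape (2 * n_per_class, 2)

for c, colour in [(0, "tab:blue"), (1, "tab:orange")]:
    pts = coeffs[labels == c]
    plt.scatter(pts[:, 0], pts[:, 1], s=10, color=colour, label=f"class {c}")
plt.xlabel("coefficient on shared atom 1")
plt.ylabel("coefficient on shared atom 2")
plt.legend()
plt.show()

A dictionary update that makes such coefficient clouds better separated, as ITDU does for the atoms selected by ITDS, directly translates into the higher recognition rates reported in the caption of Fig. 6.2.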
6.3 Non-Linear Kernel Dictionary Learning
Similar to finding non-linear sparse representations in a high-dimensional feature space, one can also learn non-linear dictionaries using kernel methods. Let $\Phi : \mathbb{R}^N \rightarrow \mathcal{F} \subset \mathbb{R}^{\tilde{N}}$ be a non-linear mapping from $\mathbb{R}^N$ into a dot product space $\mathcal{F}$. One can learn a non-linear dictionary $\mathbf{B}$ in the feature space $\mathcal{F}$ by solving the following optimization problem:
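One standard way to pose this problem, sketched here from the kernel dictionary learning (kernel K-SVD) literature and possibly differing in detail from the formulation intended in this section, constrains the dictionary to the span of the mapped training data, $\mathbf{B} = \Phi(\mathbf{Y})\mathbf{A}$ with $\mathbf{A} \in \mathbb{R}^{n \times K}$, and solves

\[
\min_{\mathbf{A},\,\mathbf{X}}
  \bigl\| \Phi(\mathbf{Y}) - \Phi(\mathbf{Y})\,\mathbf{A}\,\mathbf{X} \bigr\|_F^2
  \quad \text{subject to} \quad
  \| \mathbf{x}_i \|_0 \le T_0 \;\; \forall i,
\]

where $\mathbf{Y} = [\mathbf{y}_1, \dots, \mathbf{y}_n]$ is the training data, $\Phi(\mathbf{Y}) = [\Phi(\mathbf{y}_1), \dots, \Phi(\mathbf{y}_n)]$, and $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_n]$ collects the sparse codes. Because the objective depends on $\Phi$ only through inner products $\langle \Phi(\mathbf{y}_i), \Phi(\mathbf{y}_j) \rangle = \kappa(\mathbf{y}_i, \mathbf{y}_j)$, it can be evaluated entirely via the kernel matrix, and the mapping $\Phi$ never needs to be computed explicitly.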