where $\phi : \mathbb{R}^N \rightarrow \mathcal{F} \subset \mathbb{R}^{\tilde{N}}$ is an implicit mapping projecting the vector $\mathbf{x}$ into a higher dimensional space $\mathcal{F}$. Some commonly used kernels include polynomial kernels

$$\kappa(\mathbf{x}, \mathbf{y}) = (\langle \mathbf{x}, \mathbf{y} \rangle + c)^d$$

and Gaussian kernels

$$\kappa(\mathbf{x}, \mathbf{y}) = \exp\left(-\frac{\|\mathbf{x} - \mathbf{y}\|^2}{c}\right),$$

where $c$ and $d$ are the parameters.
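Both kernels can be written down directly; the following is a minimal NumPy sketch (the function names `polynomial_kernel` and `gaussian_kernel` are illustrative, not from the text):

```python
import numpy as np

def polynomial_kernel(x, y, c=1.0, d=2):
    """Polynomial kernel: kappa(x, y) = (<x, y> + c)^d."""
    return (np.dot(x, y) + c) ** d

def gaussian_kernel(x, y, c=1.0):
    """Gaussian kernel: kappa(x, y) = exp(-||x - y||^2 / c)."""
    return np.exp(-np.sum((x - y) ** 2) / c)

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
print(polynomial_kernel(x, y, c=1.0, d=2))  # (<x,y> + 1)^2 = (-1.5 + 1)^2 = 0.25
print(gaussian_kernel(x, y, c=2.0))
```

Note that $c$ plays a different role in each kernel: an additive offset in the polynomial case and a bandwidth in the Gaussian case.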
By substituting the mapped features into the formulation of sparse representation, we arrive at kernel sparse representation:

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad \phi(\mathbf{y}) = \phi(\mathbf{B})\alpha, \tag{5.14}$$

where, with an abuse of notation, we denote $\phi(\mathbf{B}) = [\phi(\mathbf{b}_1), \cdots, \phi(\mathbf{b}_L)]$. In other words, kernel sparse representation seeks the sparse representation for a mapped feature under the mapped dictionary in the high dimensional feature space. Problem (5.14) can be rewritten as
$$\hat{\alpha} = \arg\min_{\alpha} \|\phi(\mathbf{y}) - \phi(\mathbf{B})\alpha\|_2^2 + \lambda\|\alpha\|_1, \tag{5.15}$$

where $\lambda$ is a parameter and a larger $\lambda$ corresponds to a sparser solution. The objective function in (5.15) can be simplified as
$$\begin{aligned}
\min_{\alpha} \|\phi(\mathbf{y}) - \phi(\mathbf{B})\alpha\|_2^2 + \lambda\|\alpha\|_1
&= \kappa(\mathbf{y}, \mathbf{y}) + \alpha^T K_{BB} \alpha - 2 K_B^T \alpha + \lambda\|\alpha\|_1 \\
&= g(\alpha) + \lambda\|\alpha\|_1,
\end{aligned}$$
where

$$g(\alpha) = \kappa(\mathbf{y}, \mathbf{y}) + \alpha^T K_{BB} \alpha - 2 K_B^T \alpha,$$

$K_{BB} \in \mathbb{R}^{L \times L}$ is a matrix with $K_{BB}(i, j) = \kappa(\mathbf{b}_i, \mathbf{b}_j)$, and $K_B \in \mathbb{R}^{L \times 1} = [\kappa(\mathbf{b}_1, \mathbf{y}), \cdots, \kappa(\mathbf{b}_L, \mathbf{y})]^T$.
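This expansion involves only inner products and can be checked numerically. Below is a small sketch using the linear kernel $\kappa(\mathbf{x}, \mathbf{y}) = \mathbf{x}^T \mathbf{y}$, for which $\phi$ is the identity map, so both sides of the identity are directly computable (the dimensions and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 5, 8
B = rng.standard_normal((N, L))      # dictionary with columns b_1, ..., b_L
y = rng.standard_normal(N)           # signal
alpha = rng.standard_normal(L)       # arbitrary coefficient vector

# Linear kernel kappa(x, y) = x^T y, so phi is the identity map.
K_BB = B.T @ B                       # K_BB(i, j) = kappa(b_i, b_j)
K_B = B.T @ y                        # K_B = [kappa(b_1, y), ..., kappa(b_L, y)]^T

lhs = np.sum((y - B @ alpha) ** 2)               # ||phi(y) - phi(B) alpha||^2
rhs = y @ y + alpha @ K_BB @ alpha - 2 * K_B @ alpha
print(np.isclose(lhs, rhs))          # True
```

The same identity holds for any kernel, since expanding the squared norm produces only pairwise inner products of mapped vectors, each of which is a kernel evaluation.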
The objective is the same as that of sparse coding except for the definitions of $K_{BB}$ and $K_B$. Hence, the standard numerical tools for solving the linear sparse representation problem can be used to solve the above non-linear sparse coding problem [63].
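As one illustration of such a tool, ISTA (iterative soft-thresholding) can minimize $g(\alpha) + \lambda\|\alpha\|_1$ using only $K_{BB}$ and $K_B$. The sketch below is not from the text: the helper name `kernel_ista`, the step-size rule, and the iteration count are illustrative choices, and the example uses a linear kernel so that the recovered support can be checked against a known sparse code.

```python
import numpy as np

def kernel_ista(K_BB, K_B, lam, n_iter=500):
    """Minimize g(alpha) + lam * ||alpha||_1, where
    g(alpha) = kappa(y, y) + alpha^T K_BB alpha - 2 K_B^T alpha,
    via iterative soft-thresholding (grad g = 2 K_BB alpha - 2 K_B)."""
    alpha = np.zeros(K_B.shape[0])
    # Step size = 1 / Lipschitz constant of grad g (= 2 * lambda_max(K_BB)).
    step = 1.0 / (2 * np.linalg.eigvalsh(K_BB)[-1] + 1e-12)
    for _ in range(n_iter):
        z = alpha - step * 2 * (K_BB @ alpha - K_B)       # gradient step on g
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return alpha

# Toy example with a linear kernel (phi = identity) for checkability.
rng = np.random.default_rng(1)
B = rng.standard_normal((10, 6))
y = B[:, 2] * 1.5                      # y = 1.5 * b_3, so a 1-sparse code exists
alpha_hat = kernel_ista(B.T @ B, B.T @ y, lam=0.01)
print(np.argmax(np.abs(alpha_hat)))    # 2
```

The solver never touches $\phi$ directly: the kernel matrices carry all the information needed, which is exactly the point of the kernel trick in (5.15).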