$$\mathbf{w}_\phi^{T}S_{\Xi}^{\phi}\mathbf{w}_\phi=\sum_{i=1}^{N}\left(\mathbf{w}_\phi^{T}\left(\phi(x_i)-m_\phi\right)\right)^{2},\qquad \mathbf{w}_\phi^{T}S_{B}^{\phi}\mathbf{w}_\phi=\sum_{l=1}^{C}N_l\left(\mathbf{w}_\phi^{T}\left(m_\phi^{(l)}-m_\phi\right)\right)^{2},$$

$$\mathbf{w}_\phi^{T}S_{W}^{\phi}\mathbf{w}_\phi=\sum_{l=1}^{C}\sum_{i=1}^{N_l}\left(\mathbf{w}_\phi^{T}\left(\phi(x_i^{(l)})-m_\phi^{(l)}\right)\right)^{2},$$

where $m_\phi=\mu_\phi$ is the global sample mean and $m_\phi^{(l)}=\mu_\phi^{(l)}$ is the mean of class $l$ in the feature space. Note that $S_{\Xi}^{\phi}=S_{B}^{\phi}+S_{W}^{\phi}$.
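As a quick sanity check of the decomposition $S_{\Xi}^{\phi}=S_{B}^{\phi}+S_{W}^{\phi}$, the following NumPy sketch verifies the identity in the input space, i.e., taking $\phi$ as the identity map; this is a simplifying assumption for illustration only, not the paper's kernelized computation.

```python
import numpy as np

# Sanity check of S_total = S_B + S_W, taking phi as the identity map
# (linear case) purely for illustration.
rng = np.random.default_rng(0)
classes = [rng.normal(loc=c, size=(20, 3)) for c in (0.0, 2.0, 5.0)]  # C = 3

X = np.vstack(classes)
m = X.mean(axis=0)                                  # global mean m_phi
S_B = sum(len(Xl) * np.outer(Xl.mean(0) - m, Xl.mean(0) - m) for Xl in classes)
S_W = sum(np.outer(x - Xl.mean(0), x - Xl.mean(0)) for Xl in classes for x in Xl)
S_T = sum(np.outer(x - m, x - m) for x in X)

print(np.allclose(S_T, S_B + S_W))                  # -> True
```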
Inspired by manifold learning theory, we incorporate locality preservation into KHDA. The globally nonlinear problem can then be divided into multiple locally sub-linear problems via the locality similarity matrices; conversely, these linear problems can be combined to approximate the original problem. LPKHDA can thus be expressed as
$$\max_{\mathbf{w}_\phi}\;\frac{1}{2N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbf{w}_\phi^{T}\left(\phi(x_i)-\phi(x_j)\right)S^{x}_{ij}\left(\phi(x_i)-\phi(x_j)\right)^{T}\mathbf{w}_\phi$$
$$\qquad\qquad-\;\lambda\sum_{l=1}^{C}\frac{1}{2N_l^{2}}\sum_{i=1}^{N_l}\sum_{j=1}^{N_l}\mathbf{w}_\phi^{T}\left(\phi(x_i^{(l)})-\phi(x_j^{(l)})\right)S^{x^{(l)}}_{ij}\left(\phi(x_i^{(l)})-\phi(x_j^{(l)})\right)^{T}\mathbf{w}_\phi\tag{11}$$

$$\text{s.t.}\quad(1-\lambda)\sum_{l=1}^{C}\frac{1}{2N_l^{2}}\sum_{i=1}^{N_l}\sum_{j=1}^{N_l}\mathbf{w}_\phi^{T}\left(\phi(x_i^{(l)})-\phi(x_j^{(l)})\right)S^{x^{(l)}}_{ij}\left(\phi(x_i^{(l)})-\phi(x_j^{(l)})\right)^{T}\mathbf{w}_\phi=1\tag{12}$$
where $S^{x}=\left[S^{x}_{ij}\right]_{i,j=1}^{N}$ and $S^{x^{(l)}}=\left[S^{x^{(l)}}_{ij}\right]_{i,j=1}^{N_l}$ are the similarity matrices defined in Ref. [15].
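Ref. [15] builds such similarity matrices from a neighbourhood graph over the samples. As one plausible instantiation, the sketch below uses the heat-kernel weights commonly associated with LPP, restricted to $k$-nearest neighbours; the exact weighting of Ref. [15] may differ, and `heat_kernel_similarity` is an illustrative name, not from the paper.

```python
import numpy as np

def heat_kernel_similarity(X, t=1.0, k=5):
    """Sketch of an LPP-style similarity matrix: S_ij = exp(-||xi - xj||^2 / t),
    kept only between k-nearest neighbours. A common construction; the exact
    weights of Ref. [15] may differ."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    S = np.exp(-d2 / t)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]      # k nearest neighbours (skip self)
    mask = np.zeros((n, n), dtype=bool)
    mask[np.repeat(np.arange(n), k), nn.ravel()] = True
    mask = mask | mask.T                         # symmetrize the graph
    return np.where(mask, S, 0.0)
```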
Similar to Ref. [15], Eq. (11) can be rewritten as
$$\max_{\mathbf{w}_\phi}\;\frac{1}{2N^{2}}\,\mathbf{w}_\phi^{T}X_\phi\tilde{S}^{xx}X_\phi^{T}\mathbf{w}_\phi-\lambda\sum_{l=1}^{C}\frac{1}{2N_l^{2}}\,\mathbf{w}_\phi^{T}X_\phi^{(l)}\tilde{S}^{(l)}_{xx}X_\phi^{(l)T}\mathbf{w}_\phi\tag{13}$$

$$\text{s.t.}\quad(1-\lambda)\sum_{l=1}^{C}\frac{1}{2N_l^{2}}\,\mathbf{w}_\phi^{T}X_\phi^{(l)}\tilde{S}^{(l)}_{xx}X_\phi^{(l)T}\mathbf{w}_\phi=1$$
where $\tilde{S}^{xx}=D^{xx}-S^{xx}$ with $S^{xx}_{ij}=S^{x}_{ij}$, and $\tilde{S}^{(l)}_{xx}=D^{(l)}_{xx}-S^{(l)}_{xx}$ with $S^{(l)}_{xx,ij}=S^{x^{(l)}}_{ij}$. Here $X_\phi=\left\{\phi(x_1),\ldots,\phi(x_N)\right\}$, $X_\phi^{(l)}=\left\{\phi(x_1^{(l)}),\ldots,\phi(x_{N_l}^{(l)})\right\}$, $D^{xx}=\mathrm{diag}\left(d^{xx}_{1},\ldots,d^{xx}_{N}\right)$ with $d^{xx}_{m}=\sum_{n=1}^{N}S^{xx}_{mn}$, $m,n=1,2,\ldots,N$, and $D^{(l)}_{xx}=\mathrm{diag}\left(d^{(l)}_{1},\ldots,d^{(l)}_{N_l}\right)$ with $d^{(l)}_{i}=\sum_{j=1}^{N_l}S^{(l)}_{xx,ij}$, $i,j=1,2,\ldots,N_l$. Obviously, $\tilde{S}_{xx}$ is similar to the Laplacian matrix of LPP.
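A minimal sketch of constructing $\tilde{S}=D-S$ from a given similarity matrix, using the degree matrix defined above (`lpp_laplacian` is a hypothetical helper name, not code from the paper):

```python
import numpy as np

def lpp_laplacian(S):
    """Sketch: build S_tilde = D - S from a similarity matrix S, with
    D = diag(d_1, ..., d_n), d_m = sum_n S_mn, as defined above."""
    D = np.diag(S.sum(axis=1))
    return D - S
```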
Noting that $\mathbf{w}_\phi=\Phi^{T}\boldsymbol{\beta}$, i.e., the projection direction lies in the span of the mapped training samples, the solution to LPKHDA can be obtained by solving the following eigenproblem:
$$\left(\frac{\lambda}{2N^{2}}\,K_X\tilde{S}^{xx}K_X-\sum_{l=1}^{C}\frac{1-\lambda}{2N_l^{2}}\,K_{X^{(l)}}\tilde{S}^{(l)}_{xx}K_{X^{(l)}}\right)\boldsymbol{\beta}=\nu\left(\sum_{l=1}^{C}\frac{1-\lambda}{2N_l^{2}}\,K_{X^{(l)}}\tilde{S}^{(l)}_{xx}K_{X^{(l)}}+\eta I\right)\boldsymbol{\beta}\tag{14}$$
where $K_X=\left(k(x_i,x_j)\right)_{i,j=1}^{N}\in\mathbb{R}^{N\times N}$ and $K_{X^{(l)}}=\left(k(x_i^{(l)},x_j^{(l)})\right)_{i,j=1}^{N_l}\in\mathbb{R}^{N_l\times N_l}$ are kernel matrices, and $\boldsymbol{\beta}_p$ denotes the eigenvector corresponding to the $p$-th largest eigenvalue, $p=1,2,\ldots,M$. The features are generated by projecting $\phi(x)$ onto $\mathbf{w}_\phi^{(p)}$.
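A sketch of solving Eq. (14) numerically follows. It assumes the regularizer is $\eta I$ and that the per-class matrices $K_{X^{(l)}}$ and $\tilde{S}^{(l)}_{xx}$ have been zero-padded to size $N\times N$ so they can be summed; these are implementation choices, and names such as `lpkhda_directions` are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def lpkhda_directions(K_X, S_xx, K_cls, S_cls, N_cls, lam=0.5, eta=1e-3, M=10):
    """Sketch of Eq. (14): solve A beta = nu B beta with
       A = lam/(2 N^2) K_X S_xx K_X  -  sum_l (1-lam)/(2 N_l^2) K_l S_l K_l
       B = sum_l (1-lam)/(2 N_l^2) K_l S_l K_l  +  eta I.
    K_cls[l], S_cls[l] are per-class kernel and Laplacian-like matrices,
    assumed zero-padded to N x N so they can be summed."""
    N = K_X.shape[0]
    within = np.zeros((N, N))
    for K_l, S_l, N_l in zip(K_cls, S_cls, N_cls):
        within += (1.0 - lam) / (2.0 * N_l ** 2) * (K_l @ S_l @ K_l)
    A = lam / (2.0 * N ** 2) * (K_X @ S_xx @ K_X) - within
    B = within + eta * np.eye(N)
    nu, betas = eigh(A, B)                       # generalized symmetric eigenproblem
    return betas[:, np.argsort(nu)[::-1][:M]]    # M leading eigenvectors

# A new sample x is then embedded componentwise as
#   y_p(x) = sum_i beta_i^(p) k(x_i, x),
# i.e. y = k_vec @ betas with k_vec[i] = k(x_i, x).
```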
 