by the neighborhood relationships and labels of samples. Their components are
defined as
\[
w_{ij} =
\begin{cases}
\exp\left(-\dfrac{\|x_i - x_j\|^2}{2t^2}\right), & x_i \in NN_p(x_j)\ \text{or}\ x_j \in NN_p(x_i),\ x_i, x_j \in \omega_k, \\
0, & \text{otherwise.}
\end{cases}
\tag{2}
\]
\[
b_{ij} =
\begin{cases}
\exp\left(-\dfrac{\|u_i - u_j\|^2}{2t^2}\right), & u_i \in NN_p(u_j)\ \text{or}\ u_j \in NN_p(u_i), \\
0, & \text{otherwise.}
\end{cases}
\tag{3}
\]
where the parameter $t$ is a suitable constant, $u_i = (1/n_i)\sum_{x_k \in \omega_i} x_k$ is the mean vector of class $i$ in the input space, and $NN_p(\cdot)$ denotes the set of $p$ nearest neighbors.
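To make Eq. (2) concrete, the following sketch builds the within-class heat-kernel weight matrix $W$ with NumPy. The function name and the loop-based construction are illustrative, not from the paper; $B$ of Eq. (3) is built the same way from the class means, without the shared-class condition.

```python
import numpy as np

def heat_kernel_weights(X, labels, p=3, t=1.0):
    """Weight matrix W of Eq. (2): w_ij = exp(-||x_i - x_j||^2 / (2 t^2))
    when x_i, x_j are p-nearest neighbors of each other (either direction)
    and share a class label; 0 otherwise.  X holds one sample per row."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # indices of the p nearest neighbors of each sample (self excluded)
    nn = np.argsort(d2, axis=1)[:, 1:p + 1]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            is_neighbor = (j in nn[i]) or (i in nn[j])
            if is_neighbor and labels[i] == labels[j]:
                W[i, j] = np.exp(-d2[i, j] / (2 * t ** 2))
    return W
```

Because the neighbor condition is symmetric ("$x_i \in NN_p(x_j)$ or $x_j \in NN_p(x_i)$"), the resulting $W$ is symmetric even though raw $p$-nearest-neighbor relations are not.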
The maximization problem (1) can be converted into the following generalized eigenvalue problem:
\[
(U H U^T) A = \lambda (X L X^T) A
\tag{4}
\]
where $H = E - B$ and $L = D - W$ are Laplacian matrices, and $E$ and $D$ are diagonal matrices whose diagonal elements are the column (or, equivalently, row, since $B$ and $W$ are symmetric) sums of $B$ and $W$, respectively.
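The generalized eigenproblem (4) can be solved with standard numerical routines. Below is a minimal sketch using `scipy.linalg.eigh`; the function name, the small ridge term, and the choice of taking the largest eigenvalues (to maximize the objective) are assumptions on my part, not details stated in this excerpt.

```python
import numpy as np
from scipy.linalg import eigh

def lpda_projection(X, U, W, B, dim):
    """Solve (U H U^T) a = lam (X L X^T) a of Eq. (4), where H = E - B
    and L = D - W are Laplacians and E, D carry the column sums of B, W.
    Columns of X are samples, columns of U are class means.  Returns the
    `dim` eigenvectors with the largest eigenvalues as the columns of A."""
    H = np.diag(B.sum(axis=0)) - B
    L = np.diag(W.sum(axis=0)) - W
    Sb = U @ H @ U.T
    Sw = X @ L @ X.T
    # small ridge keeps the right-hand matrix positive definite for eigh
    Sw += 1e-8 * np.eye(Sw.shape[0])
    vals, vecs = eigh(Sb, Sw)        # eigenvalues in ascending order
    return vecs[:, ::-1][:, :dim]    # keep the top-`dim` eigenvectors
```

`eigh(a, b)` requires the right-hand matrix to be symmetric positive definite, which is why the ridge is added; both Laplacians are positive semidefinite when the weights are nonnegative.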
2.2 Derivation of KLPDA
Although LPDA is successful in many circumstances, it often fails to deliver good performance when face images undergo complex nonlinear changes caused by large pose, expression, or illumination variations, because it is a linear method in nature. In this section, we extend LPDA to a kernel formulation that yields a nonlinear locality-preserving discriminant subspace by combining the kernel trick with LPDA. The image samples are first projected into an implicit high-dimensional feature space $F$, in which the different classes are assumed to be linearly separable, by a nonlinear mapping $\phi: x \in \mathbb{R}^N \mapsto \phi(x) \in F$. LPDA is then carried out in the high-dimensional feature space $F$. Thanks to the Mercer kernel, it is unnecessary to compute $\phi$ explicitly; the inner product of two vectors in $F$ is instead obtained from a kernel function:
\[
k(x, y) = \langle \phi(x), \phi(y) \rangle .
\]
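To make the kernel trick concrete, the sketch below (my illustration, not from the paper) uses a degree-2 polynomial kernel, for which the explicit map $\phi$ is small enough to write down; the point is that `poly_kernel` returns $\langle \phi(x), \phi(y) \rangle$ without ever forming $\phi$. KLPDA itself would typically use a Gaussian or polynomial kernel in the same way.

```python
import numpy as np

def phi(x):
    """Explicit degree-2 polynomial feature map for 2-D input.
    Written out only for illustration; kernel methods never need it."""
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    """k(x, y) = <phi(x), phi(y)> = (x . y)^2 for the map above,
    computed directly in the input space."""
    return float(np.dot(x, y) ** 2)
```

For example, with $x = (1, 2)$ and $y = (3, 4)$, both `poly_kernel(x, y)` and `phi(x) @ phi(y)` give $121 = (1\cdot3 + 2\cdot4)^2$.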
For the same dataset as in Section 2.1, let $X^{\phi}$ and $U^{\phi}$ be the projections of $X$ and $U$ in $F$, and let $y_i = A^T x_i^{\phi}$ and $m_i = A^T u_i^{\phi}$ be the representations of $x_i^{\phi}$ and $u_i^{\phi}$ under the linear transform $A$. Define the weight matrices in the kernel space, $W^{\phi} = \mathrm{diag}\big(\{[w^{\phi k}_{ij}]_{i,j=1}^{n_k}\}_{k=1}^{c}\big)$ and $B^{\phi} = [b^{\phi}_{ij}]_{i,j=1}^{c}$, in the same manner as those in the input space:
\[
w^{\phi k}_{ij} =
\begin{cases}
\exp\left(-\dfrac{\|x_i^{\phi} - x_j^{\phi}\|^2}{2t^2}\right), & x_i^{\phi} \in NN_p(x_j^{\phi})\ \text{or}\ x_j^{\phi} \in NN_p(x_i^{\phi}),\ x_i, x_j \in \omega_k, \\
0, & \text{otherwise.}
\end{cases}
\tag{5}
\]
\[
b^{\phi}_{ij} =
\begin{cases}
\exp\left(-\dfrac{\|u_i^{\phi} - u_j^{\phi}\|^2}{2t^2}\right), & u_i^{\phi} \in NN_p(u_j^{\phi})\ \text{or}\ u_j^{\phi} \in NN_p(u_i^{\phi}), \\
0, & \text{otherwise.}
\end{cases}
\tag{6}
\]
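Because the within-class condition in Eq. (5) zeroes every cross-class entry, $W^{\phi} = \mathrm{diag}(\cdot)$ is block-diagonal, with one $n_k \times n_k$ heat-kernel block per class. The sketch below illustrates that structure with `scipy.linalg.block_diag`; the block values (all pairs at squared distance 1) are made-up placeholders, not results from a real dataset.

```python
import numpy as np
from scipy.linalg import block_diag

# One block per class: here a 2-sample class and a 3-sample class, with
# every within-class pair given weight exp(-1 / (2 t^2)), t = 1.
blocks = [np.exp(-np.ones((2, 2)) / 2), np.exp(-np.ones((3, 3)) / 2)]
for blk in blocks:
    np.fill_diagonal(blk, 0.0)   # no self-similarity terms
W_phi = block_diag(*blocks)      # 5 x 5, zero between classes
```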