is proposed to describe key features by a few nonlinear components [12]. Let $\Phi$ be a nonlinear mapping and the kernel function $K(x, z) = \Phi(x) \cdot \Phi(z)$. KSIR finds the dimension reduction directions by solving the following generalized eigenvalue problem with the kernel method:
$$\Sigma_{E(K|y)} \, \beta = \lambda \, \Sigma_K \, \beta \qquad (4)$$
where $K = K(A, A) \in \mathbb{R}^{n \times n}$, $\Sigma_K$ is the sample covariance matrix of $K$, and $\Sigma_{E(K|Y_J)}$ denotes the between-slice sample covariance matrix based on the kernelized slice means, given by
$$\Sigma_{E(K|Y_J)} = \frac{1}{n} \sum_{j=1}^{J} n_j \, (\bar{K}_j - \bar{K})(\bar{K}_j - \bar{K})^{\top} \qquad (5)$$
where $\bar{K} = \frac{1}{n} \sum_{i=1}^{n} K(A, x_i)$ and $\bar{K}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} K(A, x_i)$ is the kernelized slice mean of the $j$th slice.
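As a rough sketch of Equations (4) and (5), the following Python/NumPy code (not from the text; the function names are hypothetical, and the Gaussian kernel, equal-frequency slicing of sorted responses, and ridge term are illustrative assumptions) builds the kernel matrix, the between-slice covariance of the kernelized slice means, and solves the generalized eigenproblem:

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Z, gamma=1.0):
    # K(x, z) = exp(-gamma * ||x - z||^2); any kernel could be substituted here
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ksir_directions(X, y, n_slices=5, gamma=1.0, ridge=1e-6):
    """Solve Sigma_{E(K|y)} beta = lambda * Sigma_K beta, as in Eq. (4)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)                  # K = K(A, A), n x n
    K_bar = K.mean(axis=0)                       # kernelized grand mean
    Kc = K - K_bar                               # rows centered at K_bar
    Sigma_K = Kc.T @ Kc / n                      # sample covariance of K
    # between-slice covariance of kernelized slice means, as in Eq. (5)
    slices = np.array_split(np.argsort(y), n_slices)
    Sigma_B = np.zeros((n, n))
    for idx in slices:
        diff = K[idx].mean(axis=0) - K_bar       # K_bar_j - K_bar
        Sigma_B += len(idx) * np.outer(diff, diff) / n
    # generalized symmetric eigenproblem; ridge keeps Sigma_K invertible
    vals, vecs = eigh(Sigma_B, Sigma_K + ridge * np.eye(n))
    order = np.argsort(vals)[::-1]               # leading directions first
    return vecs[:, order], vals[order]
```

The ridge term is one common way to cope with the rank deficiency discussed below; it is an assumption here, not part of the formulation above.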
The full kernel matrix is time- and memory-consuming to compute, and the effective rank of the covariance matrix of the kernel data is quite low in real-world applications; together these issues lead to numerical instability and poor estimates of the e.d.r. directions. An appropriate remedy is to find a reduced-column approximation to $K$, denoted by $\tilde{K}$, which provides a good approximation, as demonstrated by the reduced support vector machine (RSVM) [7]. The main characteristic is the reduction of the full and dense kernel matrix $K$ from $n \times n$ to $n \times \tilde{n}$, where $\tilde{n}$ is the size of a randomly selected column subset of the kernel matrix. This reduced matrix is much smaller, so the optimization problem can be solved faster. The reduced kernel technique can also be applied to KSIR.
Let $\tilde{A} \in \mathbb{R}^{\tilde{n} \times d}$ be the reduced set; then the reduced KSIR can be formulated as follows:
$$\Sigma_{E(\tilde{K}|Y_J)} \, \beta = \lambda \, \Sigma_{\tilde{K}} \, \beta, \qquad (6)$$
where $\beta \in \mathbb{R}^{\tilde{n}}$ and $\tilde{K} = K(A, \tilde{A})$, with $\tilde{n} \ll n$.
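A minimal sketch of the reduced kernel construction (the function name, the Gaussian kernel, and the sampling ratio are illustrative assumptions, not choices made in the text): only $\tilde{n}$ columns of the kernel matrix are formed, corresponding to a random subset $\tilde{A}$ of the rows of $A$:

```python
import numpy as np

def reduced_kernel(X, gamma=1.0, ratio=0.1, seed=0):
    """Form K~ = K(A, A~): all n rows, but only n~ randomly chosen columns."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    n_tilde = max(1, int(ratio * n))
    cols = rng.choice(n, size=n_tilde, replace=False)  # indices of A~
    A_tilde = X[cols]
    d2 = ((X[:, None, :] - A_tilde[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)                         # shape (n, n~)
```

Because only $n \tilde{n}$ kernel evaluations are needed instead of $n^2$, both storage and computation shrink by the sampling ratio.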
Since the rank of the between-slice covariance of the kernel data, $\Sigma_{E(K|y)}$ in Equation (5), is $(J - 1)$, we do not need to solve the whole eigenvalue decomposition problem for this $n \times n$ matrix. Instead, the reduced singular value decomposition (SVD) technique is used to solve for the leading $(J - 1)$ components to save computing time.
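The rank argument can be sketched as follows (a NumPy illustration with a made-up rank-deficient matrix; the helper name is hypothetical): because the between-slice covariance has rank at most $J - 1$, keeping only the leading $J - 1$ components loses nothing relative to the full decomposition:

```python
import numpy as np

def leading_components(Sigma_B, J):
    """Keep only the leading J-1 singular vectors/values of Sigma_B."""
    # hermitian=True exploits symmetry; singular values come back descending
    U, s, _ = np.linalg.svd(Sigma_B, hermitian=True)
    return U[:, :J - 1], s[:J - 1]
```

For a symmetric positive semidefinite $\Sigma_B$ of rank $J - 1$, the truncated factors reconstruct the matrix exactly, which is why the remaining $n - (J - 1)$ components can be skipped.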
3 Semi-supervised KSIR for Dimension Reduction
In supervised dimension reduction via KSIR, the e.d.r. subspace is generated by solving a generalized eigenvalue problem. Based on this property of KSIR, we can extend KSIR from supervised dimension reduction to a semi-supervised (SS) dimension reduction approach [9]. In an SS problem, given a small portion of labeled data instances and abundant unlabeled data instances, we try to find a decision function for predicting the labels of new data instances. We assume