For performance comparison, we record the testing accuracy and the computing
time in CPU seconds. The experimental environment is Matlab 7 (R2010b)
on a personal computer equipped with an i5-2400 CPU running at a clock speed
of 3.1 GHz and 16 GB of main memory. It can be seen that KSIR is often
faster than 2T1S and LDS, especially for large data sets and multi-class problems.
For multi-class problems, we used the 'one-vs.-one' scheme in our benchmark for
KSIR and 2T1S to decompose the problem into a series of binary classification
subproblems and combine them with a voting scheme. The KSIR process is
one-time-only, after which we build a series of C(J, 2) binary classifiers in a
(J − 1)-dimensional KSIR-extracted subspace. The efficient testing time is due
to the fact that KSIR-based algorithms act on very low-dimensional KSIR
variates, and the complexity of linear SSVM is O(n + 1), where n is the
dimensionality of the input data (here n = J − 1).
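The one-vs.-one decomposition described above can be sketched as follows. This is an illustrative toy implementation, not the paper's code: a simple nearest-centroid rule stands in for the linear SSVM trained on the KSIR variates, and all function names are our own.

```python
import numpy as np
from itertools import combinations

def train_pair(X, y, a, b):
    """Fit a stand-in binary classifier for classes a vs. b:
    store the two class centroids (a proxy for a linear SVM)."""
    return (a, b, X[y == a].mean(axis=0), X[y == b].mean(axis=0))

def predict_pair(model, x):
    """Vote for whichever class centroid is closer to x."""
    a, b, ca, cb = model
    return a if np.linalg.norm(x - ca) <= np.linalg.norm(x - cb) else b

def one_vs_one(X, y, X_test):
    """One-vs.-one scheme: C(J, 2) binary subproblems for J classes,
    combined by majority vote, as described in the text."""
    classes = np.unique(y)
    models = [train_pair(X, y, a, b) for a, b in combinations(classes, 2)]
    preds = []
    for x in X_test:
        votes = [predict_pair(m, x) for m in models]
        preds.append(max(set(votes), key=votes.count))  # majority vote
    return np.array(preds)
```

Because each binary subproblem is solved in the low-dimensional extracted subspace, both training and voting stay cheap even as the number of class pairs grows.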
Figure 3 shows the comparison of average testing accuracy between the
various SSL algorithms and the pure supervised learning scheme. The test
accuracy of the classifiers built by semi-supervised KSIR with different portions
of the labeled points available for training (together with the unlabeled part) is
higher than the test accuracy of the pure supervised learning classifier using only
the labeled data points. The results show that the proposed SS-KSIR takes advantage
of the information contained in the unlabeled data. Figure 4 shows the average
computing time of pure SSVM, 2T1S, and LDS. Although the accuracy of our
method is comparable to the other SSL methods, it has the advantage of requiring
only a single training phase, without iterative retraining.
5 Conclusion
In this paper, we have proposed a semi-supervised algorithm for nonlinear
dimensionality reduction. It exploits both labeled and unlabeled data points, and
generates the e.d.r. subspace by not only minimizing the within-class variance
but also maximizing the between-class variance. Computationally, KSIR is
a standard eigenvalue decomposition problem on a large but dense covariance
matrix, which can be efficiently computed. The KSIR-based approach only
involves solving the KSIR problem once, plus a series of C(J, 2) linear binary
SVMs in a (J − 1)-dimensional space. Our method preserves the intrinsic
structure of the data set, as shown in the results. Experimental results on single
training time for the semi-supervised classification problem demonstrate
the effectiveness of our algorithm. Although the accuracy of our method is not
always as high as that of the other methods, it saves more than 10 times the
computing time. This is because our approach needs only a single training pass
to compute the directions, unlike traditional SSL/transductive learning
approaches, which usually require many iterations.
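To make the eigendecomposition step above concrete, here is a minimal sketch of linear sliced inverse regression, the method that KSIR kernelizes, using class labels as slices. This is an illustration under our own notation (function name, regularization constant), not the authors' implementation.

```python
import numpy as np

def sir_directions(X, y, n_dirs):
    """Linear SIR sketch: the e.d.r. directions solve the generalized
    eigenproblem  Gamma v = lambda * Sigma v,  where Gamma is the
    between-slice covariance of slice means and Sigma the overall
    covariance. With J classes as slices, at most J - 1 directions
    carry information."""
    Xc = X - X.mean(axis=0)
    # Small ridge term (our choice) keeps Sigma invertible.
    Sigma = np.cov(Xc, rowvar=False) + 1e-8 * np.eye(X.shape[1])
    Gamma = np.zeros_like(Sigma)
    n = len(y)
    for c in np.unique(y):            # each class forms one slice
        idx = (y == c)
        m = Xc[idx].mean(axis=0)      # slice mean of the inverse regression
        Gamma += (idx.sum() / n) * np.outer(m, m)
    # Equivalent formulation: eigenvectors of Sigma^{-1} Gamma.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sigma, Gamma))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_dirs]]
```

The kernelized version replaces X with feature-space representations via a kernel matrix, but the core computation remains this single eigendecomposition, which is why only one training pass is needed.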