standard deviations from the mean to a fixed value of +3 or -3, depending on the sign
of the original value. The remaining preprocessing steps were optionally performed
before each classification, either alone or in combination. One of these preprocessing
steps was singular value decomposition (SVD) of the data. The SVD was performed
in order to denoise the data by keeping only some of the principal components.
For this purpose, the eigenvalues of the data matrix were plotted (Fig. 5.3) and
only the important eigenvalues were kept. Another popular preprocessing approach
which was used is the removal of noninformative features via subspace-based
decomposition techniques. This approach proceeds by discarding the irrelevant
subspace, based on the assumption that the sparse portion of the data space carries
little or no useful information. One of the approaches used in this study was to reduce
the data dimension via principal component analysis (PCA). PCA is a representative
unsupervised learning method which yields a linear projection mapping the input
vector of observations onto a new feature description that is more suitable for the
given task. It is a linear orthonormal projection which minimizes the mean square
reconstruction error of the training data [1]. Another approach that was used to
reduce the data dimension was linear discriminant analysis (LDA). The goal of LDA
is to train a linear data projection such that a class separability criterion is
maximized. We further discuss the effect of these procedures in our "Results" section.
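The preprocessing pipeline described above can be sketched as follows. This is a minimal illustration using NumPy and scikit-learn on synthetic data; the matrix sizes, the number of retained components k, and the binary labels are all hypothetical placeholders, not values from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))    # hypothetical data: 100 samples, 40 features
y = rng.integers(0, 2, size=100)  # hypothetical binary class labels

# Clip standardized values at +/-3 standard deviations from the mean
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
Xz = np.clip(Xz, -3.0, 3.0)

# SVD denoising: keep only the k leading components (k chosen by
# inspecting the eigenvalue plot, as with Fig. 5.3)
k = 10
U, s, Vt = np.linalg.svd(Xz, full_matrices=False)
X_denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# PCA: unsupervised linear projection minimizing reconstruction error
X_pca = PCA(n_components=k).fit_transform(X_denoised)

# LDA: supervised projection maximizing class separability
# (at most n_classes - 1 output dimensions, here 1)
X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X_pca, y)
```

Note that truncating the SVD and projecting with PCA are closely related: both retain the directions of largest variance, so in practice one would typically apply only one of the two before the supervised LDA step.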
Fig. 5.3: Eigenvalues of the data matrix.
5.4 Pattern Recognition Methods
In this section, we describe our classification procedures. Five classifiers were used:
Fisher linear discriminant (FLD), a linear support vector machine (SVM), Gaussian
naïve Bayes (GNB), correlation analysis, and the k-nearest neighbor classifier (kNN).
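As a rough sketch, four of the five classifiers have direct scikit-learn counterparts, and the correlation-analysis classifier can be written by hand as a nearest-class-mean rule under Pearson correlation. The data, labels, and hyperparameters (e.g. the number of neighbors) below are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(80, 20))   # hypothetical training data
y_train = rng.integers(0, 2, size=80) # hypothetical binary labels
X_test = rng.normal(size=(20, 20))

classifiers = {
    "FLD": LinearDiscriminantAnalysis(),   # Fisher linear discriminant
    "linear SVM": LinearSVC(),
    "GNB": GaussianNB(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
preds = {name: clf.fit(X_train, y_train).predict(X_test)
         for name, clf in classifiers.items()}

# Correlation analysis: assign each test sample to the class whose mean
# training pattern it correlates with most strongly (Pearson r)
class_means = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
Xc = X_test - X_test.mean(axis=1, keepdims=True)
Mc = class_means - class_means.mean(axis=1, keepdims=True)
r = (Xc @ Mc.T) / np.outer(np.linalg.norm(Xc, axis=1),
                           np.linalg.norm(Mc, axis=1))
preds["correlation"] = r.argmax(axis=1)
```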
These classifiers were selected because they have been used successfully in other