and Mixca. The technique applied to select the training and testing samples was
leave-one-out with supervised training: all the records corresponding to a piece
under test, including repetitions or replicas, were removed from the training set
for that piece.
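As an illustration, this hold-out protocol can be sketched with scikit-learn's LeaveOneGroupOut, which guarantees that all records of a piece (including its replicas) leave the training set together. The arrays X, y, and piece_ids below are hypothetical placeholders, not the actual impact-echo data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical placeholders: 30 pieces, 4 replica records each,
# 10 spectral features per record, and one label shared by replicas.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 10))
y = np.repeat(rng.integers(0, 2, size=30), 4)   # label per piece
piece_ids = np.repeat(np.arange(30), 4)         # piece identity of each record

errors = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=piece_ids):
    clf = LinearDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
    errors.append(np.mean(clf.predict(X[test_idx]) != y[test_idx]))

print(f"leave-one-piece-out error: {np.mean(errors):.3f}")
```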
The first step of the classification stage consisted of determining the capacity of
the features in the data vectors to discriminate between the different classes at the
four levels of classification (i-material condition, ii-kind of defect, iii-defect ori-
entation, and iv-defect dimension). Thus, a feature selection step was made using
the classification results of LDA. It consisted of running several classification
trials using different data subsets drawn from the original 50-dimension data set.
The data subsets were matrices $\mathbf{M}_d$, $d = 1 \ldots 50$, of varying dimension, one for
each trial. All the data set vectors were used, but the number of spectral features
employed in the classification varied for each trial, increasing from 1 to 50. LDA
was applied using the linear, quadratic, and Mahalanobis distances; the maximum
classification accuracy was obtained with the quadratic distance.
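A minimal sketch of the three discriminant rules follows. Scikit-learn provides the linear and quadratic variants directly, while the Mahalanobis rule, taken here as nearest class mean under per-class covariances (i.e., the quadratic rule without the log-determinant and prior terms), is written out by hand; the function name and data shapes are illustrative assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

def mahalanobis_classify(X_train, y_train, X_test):
    """Assign each test vector to the class whose mean is nearest in the
    Mahalanobis distance computed with that class's own covariance."""
    classes = np.unique(y_train)
    d2 = []
    for c in classes:
        Xc = X_train[y_train == c]
        mu = Xc.mean(axis=0)
        inv_cov = np.linalg.pinv(np.cov(Xc, rowvar=False))
        diff = X_test - mu
        d2.append(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))
    return classes[np.argmin(d2, axis=0)]

# Linear and quadratic discriminant rules from scikit-learn:
# lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
# qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
```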
The feature selection step, based on the results of LDA classification with the
quadratic distance for the experimental data set, is shown in Fig. 5.7. The curves
describe the classification error as a function of the number of features employed.
There is an optimum number of features at which the maximum classification
accuracy is obtained: increasing the number of features progressively improves
the classification results up to a point beyond which adding more features
degrades the accuracy. The optimum number of features varied with the
classification level; it was 10, 15, 18, and 25 for classification levels i, ii, iii,
and iv, respectively.
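The sweep over nested feature subsets can be sketched as below. It assumes, as in the text, that the 50 spectral features are already ordered so that the subset $\mathbf{M}_d$ keeps the first $d$ columns, and it uses cross-validated quadratic-discriminant accuracy as the selection criterion, a stand-in for the full leave-one-piece-out procedure:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def error_curve(X, y, max_features=50, cv=5):
    """Classification error as a function of the number of leading features."""
    errors = []
    for d in range(1, max_features + 1):
        acc = cross_val_score(QuadraticDiscriminantAnalysis(),
                              X[:, :d], y, cv=cv).mean()
        errors.append(1.0 - acc)
    return np.array(errors)

# The optimum subset size minimises the error curve:
# errs = error_curve(X, y)
# d_opt = int(np.argmin(errs)) + 1
```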
The features corresponding to the maximum classification accuracy obtained by
LDA in the feature selection stage were selected as the input for the classification
with MLP and Mixca. LDA and MLP are well-known methods for classification,
whereas Mixca has only recently been reported [20]. MLP was applied using a
validation stage, one hidden layer with a tuned number of neurons, and a resilient
backpropagation learning algorithm. Mixca was applied following the definition
provided in Sect. 5.3.1 of an ICA mixture model for material quality determination
using impact-echo testing, i.e., the data for a given class of material can be
modelled by the parameters of a single ICA.
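For the MLP configuration described above, a rough PyTorch sketch follows: one hidden layer (whose size would be tuned on the validation split) trained full-batch with torch.optim.Rprop, PyTorch's resilient-backpropagation optimizer, keeping the weights with the lowest validation loss. All dimensions and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

n_features, n_hidden, n_classes = 10, 20, 4   # illustrative sizes

model = nn.Sequential(
    nn.Linear(n_features, n_hidden),   # single hidden layer
    nn.Tanh(),
    nn.Linear(n_hidden, n_classes),
)
optimizer = torch.optim.Rprop(model.parameters())  # resilient backpropagation
loss_fn = nn.CrossEntropyLoss()

def train(X_train, y_train, X_val, y_val, epochs=200):
    best_val, best_state = float("inf"), None
    for _ in range(epochs):
        optimizer.zero_grad()
        loss_fn(model(X_train), y_train).backward()
        optimizer.step()          # Rprop adapts step sizes from gradient signs
        with torch.no_grad():     # validation stage: track the best model
            val = loss_fn(model(X_val), y_val).item()
        if val < best_val:
            best_val = val
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return best_val
```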
Let us highlight some aspects of the Mixca algorithm explained in Chap. 3. The
observation vectors $\mathbf{x}_k$ are modelled as the result of applying a linear transformation
$\mathbf{A}_k$ to a vector $\mathbf{s}_k$ (the sources), whose elements are independent random
variables, plus a bias vector $\mathbf{b}_k$; thus, $\mathbf{s}_k = \mathbf{A}_k^{-1}(\mathbf{x}_k - \mathbf{b}_k)$. Let us call $\mathbf{A}_k^{-1} = \mathbf{W}_k$.
The algorithm is based on maximizing the data likelihood
$L(\mathbf{X}/\mathbf{W}) = \log p(\mathbf{X}/\mathbf{W}) = \sum_{n=1}^{N} \log p(\mathbf{x}(n)/\mathbf{W})$,
where $\mathbf{W}$ is a compact notation for all the unknown parameters $\mathbf{W}_k, \mathbf{b}_k$
of all the classes $C_k\ (k = 1 \ldots K)$. In a mixture model, the probability of every
available feature vector separates into contributions from each class,
$p(\mathbf{x}/\mathbf{W}) = \sum_{k=1}^{K} p(\mathbf{x}/C_k)\, P(C_k)$.
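In code, that decomposition reads as follows; the per-class term uses the change-of-variables formula $p(\mathbf{x}/C_k) = |\det \mathbf{W}_k| \prod_i p(s_i)$ with $\mathbf{s} = \mathbf{W}_k(\mathbf{x} - \mathbf{b}_k)$. A unit Laplacian source density is assumed here purely for illustration; Mixca itself estimates the source densities non-parametrically:

```python
import numpy as np
from scipy.special import logsumexp

def ica_mixture_loglik(X, Ws, bs, priors):
    """L(X/W) = sum_n log sum_k p(x(n)/C_k) P(C_k) for an ICA mixture (sketch)."""
    per_class = []
    for W, b, p in zip(Ws, bs, priors):
        S = (X - b) @ W.T                                # sources s = W_k (x - b_k)
        log_px = (-np.abs(S) - np.log(2.0)).sum(axis=1)  # Laplacian prod_i p(s_i)
        log_px += np.linalg.slogdet(W)[1]                # + log |det W_k|
        per_class.append(log_px + np.log(p))             # + log P(C_k)
    return logsumexp(np.vstack(per_class), axis=0).sum()
```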