the product method can be considered the best approach when the classifiers are correlated in their outputs. It has also been shown that, in the presence of outliers, rank-based methods are the best choice [4]. For a more detailed study of combining classifiers, the reader is referred to [7].
Combining classifiers to improve classification performance has recently attracted significant interest in image processing. Sameer Singh and Maneesha Singh [8] proposed a knowledge-based predictive approach that selectively triggers classifiers by estimating the Mahalanobis distance between the test sample and the probability distribution function fitted to the training data. They also demonstrated empirically that their method outperforms the traditional competing methods.
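As a rough illustration of the core quantity in such a scheme (this is not Singh and Singh's implementation; the feature dimensionality and the data are placeholders), the Mahalanobis distance between a test sample and a distribution fitted to training data can be computed as follows:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of sample x from a Gaussian fitted as (mean, cov)."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Fit the distribution to (hypothetical) training features of one class,
# then score a test sample; a small distance would trigger that classifier.
train = np.random.rand(100, 8)
mean, cov = train.mean(axis=0), np.cov(train, rowvar=False)
x = np.random.rand(8)
print(mahalanobis(x, mean, cov))
```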
This paper aims at an ensemble-based classification approach to face recognition using Gabor features at different frequencies. The face images are first passed to a Gabor feature extractor at each frequency; then, at each frequency, the features of the test image are compared with those of every training image. This yields one similarity vector per frequency. The similarity vectors are finally combined to vote on the image to which the test data belongs.
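A minimal sketch of this pipeline is given below, assuming scikit-image's Gabor filter, a hypothetical frequency set, and cosine similarity as the comparison measure; the paper's actual choices may differ:

```python
import numpy as np
from skimage.filters import gabor

FREQUENCIES = [0.1, 0.2, 0.3, 0.4]        # assumed set of Gabor frequencies

def gabor_features(img, frequency):
    """Flattened real part of the Gabor response at one frequency."""
    real, _ = gabor(img, frequency=frequency)
    return real.ravel()

def similarity_vector(test_img, train_imgs, frequency):
    """One similarity score per training image at a given frequency."""
    t = gabor_features(test_img, frequency)
    sims = []
    for img in train_imgs:
        f = gabor_features(img, frequency)
        # Cosine similarity stands in for the paper's comparison step.
        sims.append(float(t @ f / (np.linalg.norm(t) * np.linalg.norm(f) + 1e-12)))
    return np.array(sims)

def predict(test_img, train_imgs, labels):
    """Sum the per-frequency similarity vectors, then vote for the best match."""
    total = sum(similarity_vector(test_img, train_imgs, f) for f in FREQUENCIES)
    return labels[int(np.argmax(total))]
```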
2 Voting Classifier Ensemble
Classifier ensembles work well in classification because different classifiers, with their different characteristics and methodologies, can complement each other and cover one another's internal weaknesses. If a number of different classifiers vote as an ensemble, the overall error rate decreases significantly compared with using any of them individually.
One of the oldest and most common policies in classifier ensembles is majority voting. In this approach, each classifier of the ensemble is run on the input instance, and its output is taken as its vote. The winner is the class for which most of the classifiers vote; that is, the predicted class is the one most often chosen by the different classifiers. If all the classifiers indicate different classes, the one with the highest overall output is selected as the correct class.
Let us assume that $E$ is an ensemble of $n$ classifiers $\{e_1, e_2, e_3, \ldots, e_n\}$, and that there are $m$ classes. Applying the ensemble to a data sample $d$ then yields a binary matrix $D$, as in equation (1).
\[
D = \begin{pmatrix}
d_{1,1} & d_{1,2} & \cdots & d_{1,n} \\
\vdots  & \vdots  & \ddots & \vdots  \\
d_{m-1,1} & d_{m-1,2} & \cdots & d_{m-1,n} \\
d_{m,1} & d_{m,2} & \cdots & d_{m,n}
\end{pmatrix} \tag{1}
\]
where $d_{i,j}$ equals one if classifier $j$ votes that the data sample belongs to class $i$, and zero otherwise. The ensemble then decides that the data sample belongs to class $b$ according to equation (2).
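Equation (2) is not reproduced on this page, but under the standard majority rule it amounts to $b = \arg\max_i \sum_{j=1}^{n} d_{i,j}$; a minimal sketch of that decision, under this assumption:

```python
import numpy as np

def majority_vote(D):
    """D is the m x n binary matrix of equation (1): D[i, j] = 1 iff
    classifier j assigns the sample to class i. The winning class b
    is the row that collects the most votes."""
    votes = D.sum(axis=1)               # one vote total per class
    return int(np.argmax(votes))        # index b of the winning class

# Three classifiers (columns), two of which vote for class 1.
D = np.array([[1, 0, 0],
              [0, 1, 1]])
b = majority_vote(D)                    # -> 1
```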