The issues of classifier dependency and diversity are very closely linked.
More specifically, it can be argued that any effective method for generating
diversity results in dependent classifiers (otherwise obtaining diversity is
just luck). Nevertheless, as we will explain later, one can create the
classifiers independently and then, as a post-processing step, select the most
diverse ones. Naturally, there may be other properties that can be
used to differentiate ensemble schemes. We begin by surveying various
combination methods. Following that, we discuss each of the
above-mentioned properties in detail.
9.3 Combination Methods
There are two main methods for combining classifiers: weighting methods
and meta-learning. The weighting methods are best suited for problems
where the individual classifiers perform the same task and have comparable
success, or when we would like to avoid the problems associated with an
added learning step (such as overfitting or long training time).
9.3.1 Weighting Methods
When combining classifiers with weights, a classifier's classification has a
strength proportional to its assigned weight. The assigned weight can be
fixed or dynamically determined for the specific instance to be classified.
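As a simple illustration, a fixed-weight voting rule might be sketched as follows. This is a minimal sketch, not taken from the text; the classifier outputs, weights, and class labels are hypothetical.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    # Accumulate the total weight behind each predicted class label.
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    # Return the class label with the largest accumulated weight.
    return max(scores, key=scores.get)

# Hypothetical example: three classifiers predict a label for one instance,
# with fixed weights reflecting their assumed reliability.
print(weighted_vote(["spam", "ham", "spam"], [0.5, 0.3, 0.2]))  # -> spam
```

A dynamic variant would recompute the weights per instance (e.g. from local confidence estimates) before calling the same voting routine.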
9.3.1.1 Majority Voting
In this combining scheme, an unlabeled instance is assigned to the class
that obtains the highest number of votes (the most frequent vote). This
method is also known as the plurality vote (PV) or the basic ensemble
method (BEM). This approach has frequently been used as a baseline
combining method when comparing newly proposed methods.
Mathematically, it can be written as:
class(x) = \arg\max_{c_i \in dom(y)} \sum_k g(y_k(x), c_i),    (9.1)

where y_k(x) is the classification of the k'th classifier and g(y, c) is an
indicator function defined as:

g(y, c) = \begin{cases} 1 & y = c \\ 0 & y \neq c. \end{cases}    (9.2)
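A direct reading of Eqs. (9.1) and (9.2) can be sketched in Python as follows; the classifier outputs and class labels are illustrative only.

```python
def plurality_vote(classifier_outputs, classes):
    # Eq. (9.1): choose the class c_i that maximizes sum_k g(y_k(x), c_i),
    # where g (Eq. (9.2)) is 1 when the k'th prediction equals c_i, else 0.
    def votes_for(c):
        return sum(1 for y_k in classifier_outputs if y_k == c)
    return max(classes, key=votes_for)

# Illustrative example: five classifiers classify the same instance x.
outputs = ["A", "B", "A", "A", "C"]              # y_k(x) for k = 1..5
print(plurality_vote(outputs, ["A", "B", "C"]))  # -> A
```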