9.3.1.5 Dempster-Shafer
The idea of using the Dempster-Shafer theory of evidence [Buchanan and Shortliffe (1984)] for combining classifiers was suggested in [Shilen (1990)]. This method uses the notion of a basic probability assignment (bpa) defined for a certain class c_i given the instance x:
bpa(c_i, x) = 1 - \prod_k \left( 1 - P_{M_k}(y = c_i \mid x) \right).    (9.7)
Consequently, the selected class is the one that maximizes the value of the
belief function:
Bel(c_i, x) = \frac{1}{A} \cdot \frac{bpa(c_i, x)}{1 - bpa(c_i, x)},    (9.8)
where A is a normalization factor defined as:
A = \sum_{\forall c_i \in dom(y)} \frac{bpa(c_i, x)}{1 - bpa(c_i, x)} + 1.    (9.9)
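The following minimal Python sketch combines per-classifier class-probability vectors according to Eqs. (9.7)-(9.9); the function name and the (K, C) array layout are illustrative choices, not part of the original formulation:

import numpy as np

def dempster_shafer_combine(probs):
    """Select a class via Eqs. (9.7)-(9.9).

    probs: array of shape (K, C) with probs[k, i] = P_Mk(y = c_i | x).
    """
    # Eq. (9.7): basic probability assignment for each class.
    bpa = 1.0 - np.prod(1.0 - probs, axis=0)
    # Avoid division by zero when some bpa reaches 1.
    bpa = np.clip(bpa, 0.0, 1.0 - 1e-12)
    ratio = bpa / (1.0 - bpa)
    # Eq. (9.9): normalization factor A.
    A = ratio.sum() + 1.0
    # Eq. (9.8): belief function; its argmax is the selected class.
    belief = ratio / A
    return int(np.argmax(belief))

# Example: three classifiers, two classes.
# dempster_shafer_combine(np.array([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]]))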
9.3.1.6 Vogging
The idea behind the vogging approach (Variance Optimized Bagging) is to optimize a linear combination of base classifiers so as to aggressively reduce variance while attempting to preserve a prescribed accuracy [Derbeko et al. (2002)]. For this purpose, Derbeko et al. applied the Markowitz mean-variance portfolio theory, which is used for generating low-variance portfolios of financial assets.
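The following Python sketch conveys the Markowitz-style idea: minimize the variance of a weighted combination of base classifiers subject to a lower bound on its mean performance. Here the per-classifier "returns" are assumed to be per-instance correctness scores on a validation set; this setup, the function name, and the use of scipy are illustrative assumptions, not Derbeko et al.'s exact formulation:

import numpy as np
from scipy.optimize import minimize

def vogging_weights(scores, target_mean):
    """Minimize combination variance subject to a mean constraint.

    scores: array of shape (n_samples, K); scores[:, k] might hold the
    0/1 correctness of base classifier k on each validation instance.
    """
    mu = scores.mean(axis=0)             # mean "return" per classifier
    cov = np.cov(scores, rowvar=False)   # covariance between classifiers
    K = scores.shape[1]

    def variance(w):                     # portfolio variance w' Cov w
        return w @ cov @ w

    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # weights sum to 1
        {"type": "ineq", "fun": lambda w: w @ mu - target_mean},  # mean >= target
    ]
    result = minimize(variance, np.full(K, 1.0 / K), method="SLSQP",
                      bounds=[(0.0, 1.0)] * K, constraints=constraints)
    return result.x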
9.3.1.7 Naïve Bayes
Using Bayes' rule, one can extend the naïve Bayes idea for combining various classifiers:
Class(x) = \operatorname*{argmax}_{\substack{c_j \in dom(y) \\ P(y = c_j) > 0}} \; P(y = c_j) \cdot \prod_{k=1}^{K} \frac{P_{M_k}(y = c_j \mid x)}{P(y = c_j)}.    (9.10)
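A minimal Python sketch of Eq. (9.10) follows; the argument names and array layout are illustrative assumptions:

import numpy as np

def naive_bayes_combine(probs, priors, eps=1e-12):
    """Select a class via Eq. (9.10).

    probs: array of shape (K, C) with probs[k, j] = P_Mk(y = c_j | x).
    priors: array of shape (C,) with the class priors P(y = c_j).
    """
    # Product over the K classifiers of P_Mk(y = c_j | x) / P(y = c_j).
    score = priors * np.prod(probs / np.maximum(priors, eps), axis=0)
    # The argmax in Eq. (9.10) only ranges over classes with P(y = c_j) > 0.
    score[priors <= 0] = -np.inf
    return int(np.argmax(score))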
9.3.1.8 Entropy Weighting
The idea in this combining method is to give each classifier a weight that
is inversely proportional to the entropy of its classification vector.
Class(x) = \operatorname*{argmax}_{c_i \in dom(y)} \sum_{k:\, c_i = \operatorname*{argmax}_{c_j \in dom(y)} P_{M_k}(y = c_j \mid x)} E(M_k, x),    (9.11)

where E(M_k, x) denotes the entropy of the classification vector produced by classifier M_k for the instance x.
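A minimal Python sketch of Eq. (9.11), taking E(M_k, x) to be the Shannon entropy of classifier k's classification vector; since E is only described in prose above, this reading is an assumption:

import numpy as np

def entropy_weighting_combine(probs, eps=1e-12):
    """Select a class via Eq. (9.11).

    probs: array of shape (K, C) with probs[k, j] = P_Mk(y = c_j | x).
    """
    # Assumed E(M_k, x): Shannon entropy of each classification vector.
    entropy = -np.sum(probs * np.log(np.maximum(probs, eps)), axis=1)
    votes = np.argmax(probs, axis=1)   # class each classifier votes for
    score = np.zeros(probs.shape[1])
    for k, c in enumerate(votes):
        score[c] += entropy[k]         # Eq. (9.11): sum E(M_k, x) over voters
    return int(np.argmax(score))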