networks with good error diversity according to the chosen diversity
measure.
9.5.1.2 Starting Point in Hypothesis Space
Some inducers can gain diversity by starting the search in the Hypothesis
Space from different points. For example, the simplest way to manipulate
the back-propagation inducer is to assign different initial weights to the
network [Kolen and Pollack (1991)]. Their experiments indicate that the
resulting networks differed in the number of cycles they took to converge
to a solution, and in whether they converged at all. While this is a very
simple way to gain diversity, it is now generally accepted that it is not
sufficient for achieving good diversity [Brown et al. (2005)].
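As a minimal sketch of this idea (our illustration, not code from the cited
studies; it assumes scikit-learn and a toy dataset), one can train several
networks that differ only in their random seed, and hence in their initial
weights, and combine them by majority vote:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Toy data; every ensemble member is trained on the same set.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Each member differs only in random_state, i.e. in its initial weights.
    ensemble = [
        MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                      random_state=seed).fit(X, y)
        for seed in range(5)
    ]

    # Combine the members by majority vote over their predictions.
    votes = np.stack([member.predict(X) for member in ensemble])
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)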
9.5.1.3 Hypothesis Space Traversal
These techniques alter the way the inducer traverses the space, thereby
leading different classifiers to converge to different hypotheses [Brown
et al. (2005)]. We differentiate between two techniques for manipulating
the space traversal for gaining diversity: Random-based and
Collective-Performance-based.
9.5.1.3.1 Random-based Strategy
The idea in this case is to “inject randomness” into the inducers in order
to increase the independence among the ensemble's members. Ali and
Pazzani (1996) propose changing the rule-learning HYDRA algorithm
in the following way: instead of selecting the best literal at each stage
(using, for instance, an information gain measure), the literal is selected
randomly, such that its probability of being selected is proportional to its
measured value. A similar idea has been implemented for C4.5 decision
trees [Dietterich (2000a)]: instead of selecting the best attribute at each
stage, an attribute is selected randomly (with equal probability) from the
set of the 20 best attributes. Markov Chain Monte Carlo (MCMC) methods
can also be used for introducing randomness into the induction process
[Neal (1993)].
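To make the two randomized schemes concrete, the following sketch (our
illustration, not code from HYDRA or from Dietterich's C4.5 variant;
score stands for a caller-supplied quality measure such as information
gain) shows both selection rules:

    import random

    def choose_top_k(attributes, score, k=20, rng=random):
        # Dietterich-style: pick uniformly among the k best-scoring attributes.
        ranked = sorted(attributes, key=score, reverse=True)
        return rng.choice(ranked[:k])

    def choose_proportional(attributes, score, rng=random):
        # Ali and Pazzani-style: selection probability proportional to the score.
        weights = [max(score(a), 0.0) for a in attributes]
        return rng.choices(attributes, weights=weights, k=1)[0]

Running the same induction algorithm repeatedly with different random
seeds then yields structurally different rules or trees, which is the source
of the ensemble's diversity.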
9.5.1.3.2 Collective-Performance-based Strategy
In this case the evaluation function used in the induction of each member
is extended to include a penalty term that encourages diversity. The most
studied penalty method is Negative Correlation Learning [Liu (2005);
Brown and Wyatt (2003); Rosen (1996)]. The idea of negative correlation
learning is to train the networks in the ensemble simultaneously, penalizing
each member whose error is positively correlated with the errors of the
other members, so that the individual errors tend to cancel out when the
outputs are combined.
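As a concrete illustration, one standard formulation (the notation here is
ours, not taken from this text) extends the error of the i-th network on a
training set of N examples with a correlation penalty:

    e_i = \frac{1}{N} \sum_{n=1}^{N} \left[ \tfrac{1}{2} \bigl( F_i(n) - d(n) \bigr)^2 + \lambda \, p_i(n) \right],
    \qquad
    p_i(n) = \bigl( F_i(n) - \bar{F}(n) \bigr) \sum_{j \neq i} \bigl( F_j(n) - \bar{F}(n) \bigr),

where d(n) is the target output for example n, \bar{F}(n) is the ensemble's
average output, and \lambda \geq 0 controls the strength of the diversity
penalty; setting \lambda = 0 recovers independent training of the members.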