prediction for the unknown instance. The composite bagged classifier, I,
returns the class that has been predicted most often (the voting method).
As a result, bagging produces a combined model that often performs better
than a single model built from the original data. Breiman (1996) notes
that this is especially true for unstable inducers, because bagging can
eliminate their instability. In this context, an inducer is considered
unstable if perturbing the learning set can cause significant changes in the
constructed classifier.
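To make the voting step concrete, the following Python sketch builds T classifiers on bootstrap samples and returns the class predicted most often for a new instance x. It is illustrative only; the names inducer, bag_predict and T are assumptions, not the book's notation.

    from collections import Counter
    import random

    def bag_predict(training_set, inducer, x, T=25):
        """Illustrative bagging sketch: train T classifiers on bootstrap
        samples and return the majority-vote prediction for x."""
        votes = []
        n = len(training_set)
        for _ in range(T):
            # Bootstrap sample: draw n instances with replacement,
            # each instance chosen with equal probability.
            sample = [random.choice(training_set) for _ in range(n)]
            classifier = inducer(sample)
            votes.append(classifier(x))
        # The composite classifier returns the most frequently
        # predicted class (the voting method).
        return Counter(votes).most_common(1)[0][0]

Here inducer is any learning procedure that takes a training set and returns a classifier, i.e. a function mapping an instance to a class label.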
Bagging, like boosting, is a technique for improving the accuracy of a
classifier by producing different classifiers and combining the resulting
models. Both use a form of voting to combine the outputs of the different
classifiers of the same type. In boosting, unlike bagging, each classifier
is influenced by the performance of the classifiers built before it: the new
classifier tries to pay more attention to the instances on which the previous
classifiers erred. In bagging, each instance is chosen with equal probability,
while in boosting, instances are chosen with a probability proportional to
their weight. Furthermore, according to Quinlan (1996), as mentioned above,
bagging requires that the learning system not be stable, whereas boosting
does not preclude the use of unstable learning systems, provided that their
error rate can be kept below 0.5.
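The difference in how the two methods select training instances can be sketched as follows. This Python fragment is illustrative; the helper name draw_training_sample is an assumption rather than anything defined in the book. Calling it without weights reproduces bagging's uniform selection, while passing the current boosting weights draws instances with probability proportional to their weight.

    import random

    def draw_training_sample(instances, weights=None):
        """Draw a training sample of the same size as the original set.
        Bagging: weights=None, every instance equally likely.
        Boosting: instances drawn with probability proportional to their
        current weight, so previously misclassified instances are favoured."""
        n = len(instances)
        if weights is None:
            weights = [1.0] * n   # bagging: uniform selection
        return random.choices(instances, weights=weights, k=n)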
9.4.2.2 Wagging
Wagging is a variant of bagging [Bauer and Kohavi (1999)] in which
each classifier is trained on the entire training set, but each instance is
stochastically assigned a weight. Figure 9.12 presents the pseudo-code of
the wagging algorithm.
Fig. 9.12 The wagging algorithm.
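Since the pseudo-code of Fig. 9.12 is not reproduced here, the following Python sketch conveys only the general idea of wagging; the use of Gaussian noise on the weights and all names (weighted_inducer, T) are assumptions made for illustration, not the exact procedure of the figure.

    import random

    def wagging(training_set, weighted_inducer, T=25):
        """Illustrative wagging sketch: every classifier sees the entire
        training set, but each instance receives a stochastically
        perturbed weight."""
        classifiers = []
        n = len(training_set)
        for _ in range(T):
            # Start from uniform weights and add random noise (here Gaussian,
            # as one common choice); negative weights are clipped to zero,
            # which effectively drops those instances for this round.
            weights = [max(0.0, 1.0 + random.gauss(0.0, 2.0)) for _ in range(n)]
            classifiers.append(weighted_inducer(training_set, weights))
        return classifiers

As with bagging, the resulting classifiers are combined by voting; weighted_inducer is assumed to be a learning procedure that accepts per-instance weights.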