These factors represent three key advantages that machine learning ensembles hold; they also correspond to three of the major limitations often recognized in individual machine learning algorithms. It is for this reason that ensembles have the potential to deliver better detection accuracy than many individual machine learning algorithms.
Ensembles of classifiers, which fall under decision committee learning, have demonstrated spectacular success in reducing the classification error of learned classifiers. These techniques develop a classifier in the form of a committee of subsidiary classifiers. The committee members are applied to a classification task and their individual outputs are combined to create a single classification from the committee as a whole; this combination of outputs is often performed by majority vote. Three decision committee learning approaches, AdaBoost, MultiBoosting and Bagging, have received extensive attention. They are recent methods for improving the predictive power of classifier learning systems.
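For illustration only, the following sketch builds a small committee with scikit-learn and combines the members' outputs by a hard majority vote; the particular base learners, the synthetic dataset and all parameter values are assumptions made for the example and are not the configuration used in the experiments described here.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic two-class problem standing in for an arbitrary database.
    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Committee of subsidiary classifiers; 'hard' voting = majority vote on labels.
    committee = VotingClassifier(
        estimators=[
            ("tree", DecisionTreeClassifier(random_state=0)),
            ("nb", GaussianNB()),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="hard",
    )
    committee.fit(X_tr, y_tr)
    print("committee accuracy:", committee.score(X_te, y_te))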
Some classification methods are unstable in the sense that small perturbations in their training sets may result in large changes in the constructed classifier. Breiman [31] showed that decision trees and neural networks are unstable classifiers. Unstable classifiers can have their accuracy improved by perturbing and combining, i.e., by generating a series of classifiers from perturbed versions of the training set and then combining these classifiers to predict together.
Boosting is one of the efficient perturbing-and-combining methods. Though a number of variants of boosting are available, we use the most popular form of boosting, known as AdaBoost (Adaptive Boosting), for our experimentation.
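A minimal AdaBoost sketch, again assuming scikit-learn and a synthetic dataset; the default decision-stump base learner and the committee size are illustrative choices, not the experimental settings used in this work.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Each round re-weights the training set to focus on previously
    # misclassified examples; the weighted base classifiers then vote.
    boosted = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0)
    boosted.fit(X_tr, y_tr)
    print("AdaBoost accuracy:", boosted.score(X_te, y_te))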
MultiBoosting is an extension of the highly successful AdaBoost technique for forming decision committees. MultiBoosting can be viewed as combining AdaBoost with Wagging: it is able to harness both AdaBoost's high bias and variance reduction and Wagging's superior variance reduction.
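Standard libraries such as scikit-learn do not ship a MultiBoosting estimator, so the following is only a rough sketch of the idea rather than Webb's exact algorithm: each sub-committee is an AdaBoost ensemble trained on a Wagging-style randomly re-weighted copy of the training set, and the sub-committees then vote. The Poisson(1) weights, the number of sub-committees and their sizes are assumptions made purely for illustration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    sub_committees = []
    for _ in range(5):
        # Wagging-style step: randomly re-weight the training instances
        # (Poisson(1) weights are an assumption made for this sketch).
        weights = rng.poisson(lam=1.0, size=len(y_tr)).astype(float)
        weights[weights == 0] = 1e-3  # keep every instance with a tiny weight
        booster = AdaBoostClassifier(n_estimators=20, random_state=0)
        booster.fit(X_tr, y_tr, sample_weight=weights)
        sub_committees.append(booster)

    # The sub-committees then vote, as in the committee schemes above.
    votes = np.stack([m.predict(X_te) for m in sub_committees])
    majority = (votes.mean(axis=0) > 0.5).astype(int)
    print("sketch accuracy:", (majority == y_te).mean())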
Bagging (Bootstrap Aggregating), on the other hand, combines voting with a method for generating the classifiers that provide the votes. The simple idea is to train each base classifier on a different random subset of the patterns, with the goal of bringing about diversity among the base classifiers.
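A bagging sketch under the same assumptions (scikit-learn, synthetic data): each base classifier, a decision tree by default, is trained on its own bootstrap sample of the training patterns, and the resulting trees vote.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Each base classifier sees a different bootstrap sample of the training
    # patterns, which is what induces diversity among the committee members.
    bagger = BaggingClassifier(n_estimators=25, bootstrap=True, random_state=0)
    bagger.fit(X_tr, y_tr)
    print("bagging accuracy:", bagger.score(X_te, y_te))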
Databases can have nominal, numeric or mixed attributes and classes. Not all classification algorithms perform well for different types of attributes and classes, or for databases of different sizes. In order to design a generic classification tool, one should therefore consider the behaviour of various existing classification algorithms on different datasets.