specifically for neural networks. The works of [Breiman (2001); Rokach and
Maimon (2005b)] were developed specifically for decision trees.
Other implementations are considered inducer-independent. These
implementations can be applied to any given inducer and, unlike the
inducer-dependent ones, are not tied to a specific inducer.
9.8 Multistrategy Ensemble Learning
Multistrategy ensemble learning combines several ensemble strategies. It
has been shown that this hybrid approach increases the diversity of
ensemble members.
MultiBoosting, an extension of AdaBoost obtained by adding wagging-
like features [Webb (2000)], can harness both AdaBoost's high bias and
variance reduction and wagging's superior variance reduction. Using C4.5
as the base learning algorithm, MultiBoosting produces decision committees
with lower error than either AdaBoost or wagging significantly more often
than the reverse. It also offers the further advantage over AdaBoost
of being well suited to parallel execution. MultiBoosting has been further
extended by adding the stochastic attribute selection committee learning
strategy to boosting and wagging [Webb and Zheng (2004)]. The latter
study showed that combining ensemble strategies increases diversity at the
cost of a small increase in the test error of individual members, a trade-off
that reduces the overall ensemble test error.
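To make the idea concrete, the following sketch builds a simplified MultiBoosting-style committee with scikit-learn: each sub-committee is an AdaBoost run whose instance weights are first perturbed by wagging-like Poisson weights, and the sub-committees then vote. The function names, the number of sub-committees, and the use of scikit-learn's default base learner (rather than C4.5) are illustrative assumptions, not Webb's original formulation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

def multiboost_sketch(X, y, n_subcommittees=5, boosts_per_subcommittee=10,
                      random_state=0):
    """Simplified MultiBoosting-style ensemble (illustrative sketch):
    each sub-committee is an AdaBoost run started from wagging-like
    Poisson instance weights instead of uniform weights."""
    rng = np.random.RandomState(random_state)
    subcommittees = []
    for _ in range(n_subcommittees):
        # Wagging-style reweighting: random Poisson(1) weights; instances
        # drawn as zero are kept with a tiny weight rather than discarded.
        w = rng.poisson(1.0, size=len(y)).astype(float)
        w[w == 0] = 1e-8
        w *= len(y) / w.sum()  # renormalise so the weights sum to n
        booster = AdaBoostClassifier(n_estimators=boosts_per_subcommittee,
                                     random_state=rng.randint(1 << 30))
        booster.fit(X, y, sample_weight=w)
        subcommittees.append(booster)
    return subcommittees

def predict_vote(subcommittees, X):
    # Majority vote over the sub-committees (each gets equal weight).
    preds = np.stack([c.predict(X) for c in subcommittees])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

X, y = make_classification(n_samples=300, random_state=1)
committee = multiboost_sketch(X, y)
print(predict_vote(committee, X[:5]))
```

Because the sub-committees are built independently of one another, they can be trained in parallel, which is the property noted above as an advantage over plain AdaBoost.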
Another multistrategy method creates the ensemble by decomposing the
original classification problem into several smaller and more manageable
sub-problems. This multistrategy uses an elementary decomposition
framework that consists of five different elementary decompositions:
Concept Aggregation, Function, Sample, Space and Feature Set. A
complicated decomposition can be obtained by applying elementary
decompositions recursively. Given a certain problem, the procedure selects
the most appropriate elementary decomposition (if any) for that problem.
A suitable decomposer then decomposes the problem and provides a set of
sub-problems. A similar procedure is performed on each sub-problem until
no beneficial decomposition is anticipated. The selection of the best
elementary decomposition for a given problem is performed by using a
meta-learning approach.
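As a concrete illustration of one elementary decomposition, the sketch below applies a Feature Set decomposition: the attributes are partitioned into disjoint subsets, one tree is induced per subset, and the members are combined by majority vote. The dataset, the fixed three-way split and the plain vote are illustrative assumptions; they stand in for the meta-learning-driven selection and the recursive procedure described above.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

# Feature Set decomposition (sketch): partition the attributes into
# disjoint subsets, train one tree per subset, combine by majority vote.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.RandomState(0)
feature_subsets = np.array_split(rng.permutation(X.shape[1]), 3)

members = []
for subset in feature_subsets:
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(X[:, subset], y)
    members.append((subset, tree))

votes = np.stack([tree.predict(X[:, subset]) for subset, tree in members])
prediction = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
print("training accuracy of the combined members:", (prediction == y).mean())
```

In the full framework, each sub-problem produced this way could itself be examined for a further beneficial decomposition before a member is induced.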
9.9 Which Ensemble Method Should be Used?
Recent research has experimentally evaluated bagging and seven other
randomization-based approaches for creating an ensemble of decision tree