A general framework that searches for helpful feature set partitioning
structures has also been proposed [Rokach and Maimon (2005b)]. This
framework nests many algorithms, two of which are tested empirically over a
set of benchmark datasets. The first algorithm performs a serial search while
using a new Vapnik-Chervonenkis dimension bound for multiple oblivious
trees as an evaluation scheme. The second algorithm performs a multi-search
while using a wrapper evaluation scheme. This work indicates that
feature set decomposition can increase the accuracy of decision trees.
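As a rough illustration of the feature set decomposition idea (though not of Rokach and Maimon's specific framework or its VC-dimension bound), the following scikit-learn sketch partitions the feature set into disjoint subsets, trains one tree per subset, and combines the trees by majority vote. The random partition, and ordinary CART trees standing in for oblivious trees, are simplifying assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Randomly partition the features into three disjoint subsets.
rng = np.random.default_rng(0)
subsets = np.array_split(rng.permutation(X.shape[1]), 3)

# Train one tree per feature subset (a wrapper scheme would score
# candidate partitions by the resulting cross-validated accuracy).
members = [(s, DecisionTreeClassifier(random_state=0).fit(X_train[:, s], y_train))
           for s in subsets]

# Combine the members' predictions by majority vote.
votes = np.array([tree.predict(X_test[:, s]) for s, tree in members])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("decomposed-ensemble accuracy:", (pred == y_test).mean())
```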
9.5.5 Multi-Inducers
In the Multi-Inducer strategy, diversity is obtained by using different types
of inducers [Michalski and Tecuci (1994)]. Each inducer has an explicit or
implicit bias [Mitchell (1980)] that leads it to prefer certain generalizations
over others. Ideally, this multi-inducer strategy would always perform as
well as the best of its constituents. Even more ambitiously, there is hope that
this combination of paradigms might produce synergistic effects, leading to
levels of accuracy that neither atomic approach by itself would be able to
achieve.
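A minimal sketch of this strategy, assuming scikit-learn: three inducers with deliberately different biases are combined by majority vote. The particular inducers and dataset are illustrative choices, not prescribed by the strategy itself.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
ensemble = VotingClassifier([
    ("tree", DecisionTreeClassifier(random_state=0)),  # axis-parallel split bias
    ("knn", KNeighborsClassifier()),                   # local, instance-based bias
    ("nb", GaussianNB()),                              # feature-independence bias
])
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```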
Most research in this area has been concerned with combining empirical
approaches with analytical methods (see, for instance, [Towell and Shavlik
(1994)]). Woods et al. (1997) combine four types of base inducers (decision
trees, neural networks, k-nearest neighbors, and quadratic Bayes). They
then estimate local accuracy in the feature space to choose the appropriate
classifier for a given new unlabeled instance; a sketch of this selection
scheme appears below. Wang et al. (2004) examined the usefulness of
adding decision trees to an ensemble of neural networks and concluded
that adding a few decision trees (but not too many) usually improves
performance. Langdon et al. (2002) proposed using Genetic Programming
to find an appropriate rule for combining decision trees with neural
networks.
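The following sketch approximates Woods et al.'s dynamic selection scheme under stated assumptions: scikit-learn stand-ins for the four inducer types (a CART tree, a multilayer perceptron, k-nearest neighbors, and quadratic discriminant analysis in place of quadratic Bayes), a held-out validation set for estimating local accuracy, and a neighborhood size of 10.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, random_state=0)

# Four heterogeneous base inducers, mirroring the four types above.
bases = [DecisionTreeClassifier(random_state=0),
         make_pipeline(StandardScaler(),
                       MLPClassifier(max_iter=2000, random_state=0)),
         KNeighborsClassifier(),
         QuadraticDiscriminantAnalysis()]
for clf in bases:
    clf.fit(X_tr, y_tr)

# Cache each base classifier's predictions on a held-out validation set.
val_preds = np.array([clf.predict(X_val) for clf in bases])
nn = NearestNeighbors(n_neighbors=10).fit(X_val)

# For each new instance, select the classifier that is most accurate
# on the instance's 10 nearest validation neighbors.
y_pred = []
for x in X_test:
    _, idx = nn.kneighbors(x.reshape(1, -1))
    local_acc = (val_preds[:, idx[0]] == y_val[idx[0]]).mean(axis=1)
    y_pred.append(bases[int(np.argmax(local_acc))].predict(x.reshape(1, -1))[0])
print("dynamic-selection accuracy:", (np.array(y_pred) == y_test).mean())
```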
The model class selection (MCS) system [Brodley (1995)] fits different
classifiers to different sub-spaces of the instance space by employing one
of three classification methods (a decision tree, a discriminant function, or
an instance-based method). In order to select the classification method,
MCS uses the characteristics of the underlying training set together with a
collection of expert rules. Brodley's expert rules were based on empirical
comparisons of the methods' performance (i.e., on prior knowledge).
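A toy sketch of the MCS idea follows; the threshold rules below are hypothetical placeholders standing in for Brodley's actual expert rules, chosen here only to show how training-set characteristics can drive the choice of method.

```python
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def select_inducer(X, y):
    """Pick a classification method from simple training-set statistics.
    The rules are illustrative, not Brodley's expert rules."""
    n_samples, n_features = X.shape
    if n_samples < 20 * n_features:   # scarce data: prefer a low-variance linear model
        return LinearDiscriminantAnalysis()
    if n_features <= 10:              # low dimension: instance-based method is viable
        return KNeighborsClassifier()
    return DecisionTreeClassifier(random_state=0)

X, y = load_wine(return_X_y=True)
clf = select_inducer(X, y).fit(X, y)
print("selected inducer:", type(clf).__name__)
```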
The NeC4.5 algorithm, which integrates a decision tree with neural
networks [Zhou and Jiang (2004)], first trains a neural network ensemble.
The trained ensemble is then used to generate a new training set by
replacing the original class labels with the ensemble's predictions, and
finally a C4.5 decision tree is grown from this new training set.
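A hedged sketch of this relabeling scheme, using scikit-learn stand-ins (bagged multilayer perceptrons for the neural network ensemble and a CART tree for C4.5) and omitting NeC4.5's option of generating additional synthetic training instances:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Step 1: train a neural network ensemble (bagged MLPs as a stand-in).
net = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
ensemble = BaggingClassifier(net, n_estimators=5, random_state=0).fit(X, y)

# Step 2: relabel the training instances with the ensemble's predictions.
y_new = ensemble.predict(X)

# Step 3: grow a single, comprehensible tree from the relabeled data.
tree = DecisionTreeClassifier(random_state=0).fit(X, y_new)
print("tree fidelity to ensemble:", (tree.predict(X) == y_new).mean())
```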