In an approach which Cohen et al. (2007) term decision tree ISD,
the partition of the instance-space is attained by a decision tree. Along
with the decision tree, the ISD method employs a second classification
method, which classifies the tree's leaves (the leaves represent the
different sub-spaces). In other words, decision tree ISD methods produce
decision trees in which the leaves are assigned classifiers rather than simple
class labels. When a non-decision-tree method produces the leaves' classifiers,
the composite classifier is sometimes termed a decision tree hybrid classifier.
The term “decision tree hybrid classifier”, however, is also used in a
broader context, such as in cases where a sub-classification method decides
about the growth of the tree and its pruning [Sakar and Mammone (1993)].
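The defining property above — internal nodes route an instance, while a leaf delegates to its own classifier — can be sketched in a few lines. The tree layout, attribute names, and leaf rules below are all illustrative stand-ins, not any particular published method:

```python
# Minimal sketch of a decision tree ISD ("hybrid") classifier: internal
# nodes test an attribute, but a leaf holds a *classifier* (a callable)
# rather than a fixed class label. All names and values are illustrative.

def majority_leaf(label):
    """A plain leaf: always predicts one class label."""
    return lambda instance: label

def threshold_leaf(attr, cut, below, above):
    """A leaf that sub-classifies with a simple threshold rule
    (standing in for e.g. a naive Bayes or GA leaf classifier)."""
    return lambda instance: below if instance[attr] < cut else above

# A hand-built tree: ('split', attribute, threshold, left, right) for
# internal nodes; a bare callable for (classifier) leaves.
tree = ('split', 'x1', 0.5,
        majority_leaf('A'),                    # x1 < 0.5: simple label
        threshold_leaf('x2', 2.0, 'B', 'C'))   # x1 >= 0.5: sub-classifier

def predict(node, instance):
    if callable(node):           # reached a leaf: delegate to its classifier
        return node(instance)
    _, attr, cut, left, right = node
    branch = left if instance[attr] < cut else right
    return predict(branch, instance)

print(predict(tree, {'x1': 0.2, 'x2': 9.0}))  # 'A'
print(predict(tree, {'x1': 0.9, 'x2': 1.0}))  # 'B'
```

Replacing the leaf callables with fitted models (naive Bayes, a neural network, a GA-evolved rule set) yields the hybrid classifiers discussed below.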
There are two basic techniques for implementing decision tree ISD.
The first technique is to use some decision tree method to create the
tree and then, in a post-growing phase, to attach classifiers to the tree's
leaves. The second technique is to consider the classifiers as part of the
tree-growing procedure. The latter technique can potentially achieve more
accurate composite classifiers, but it usually requires more computationally
intensive procedures.
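The first (post-growing) technique can be illustrated concretely: grow a tree as usual, then, in a second pass, fit a classifier on the training instances sorted into each leaf. The pre-grown depth-1 "stump" and the 1-nearest-neighbour leaf classifiers below are stand-ins for whatever tree grower and leaf method one actually uses:

```python
# Sketch of the post-growing technique: the tree is fixed first, and
# classifiers are attached to its leaves afterwards. Data, the stump,
# and the 1-NN leaf method are illustrative assumptions.

def route(instance):
    # A pre-grown stump: leaf 0 if x0 < 0.5, else leaf 1.
    return 0 if instance[0] < 0.5 else 1

train = [((0.1, 0.0), 'A'), ((0.2, 1.0), 'A'),
         ((0.8, 0.0), 'B'), ((0.9, 1.0), 'C')]

# Post-growing phase: gather each leaf's instances and attach a 1-NN
# classifier trained only on that leaf's data.
per_leaf = {0: [], 1: []}
for x, y in train:
    per_leaf[route(x)].append((x, y))

def make_1nn(examples):
    def classify(x):
        dist = lambda e: sum((ei - xi) ** 2 for ei, xi in zip(e[0], x))
        return min(examples, key=dist)[1]
    return classify

leaf_clf = {leaf: make_1nn(ex) for leaf, ex in per_leaf.items()}

def predict(x):
    return leaf_clf[route(x)](x)

print(predict((0.85, 0.9)))  # nearest right-leaf neighbour (0.9, 1.0) -> 'C'
```

The second technique would instead interleave the two steps, letting the candidate leaf classifiers influence where and whether the tree splits.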
Carvalho and Freitas (2004) proposed a hybrid decision tree/genetic
algorithm (GA) classifier, which grows a decision tree and assigns some of
the leaves class labels and the others GA classifiers. The leaves
with the classifiers are those that have a small number of corresponding
instances. A previously unseen instance is subsequently either directly
assigned a class label or sub-classified by a GA classifier (depending
on the leaf to which the instance is sorted). Zhou and Chen (2002)
suggested a method called hybrid decision tree (HDT). HDT uses the
binary information gain ratio criterion to grow a binary decision tree in
an instance-space that is defined by the nominal explaining-attributes only.
A feed-forward neural network subsequently classifies the leaves whose
diversity exceeds a pre-defined threshold. The network only uses the ordinal
explaining-attributes.
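The HDT idea — split on nominal attributes, then hand only the class-diverse leaves to a second learner that sees the ordinal attributes — can be sketched as follows. A 1-NN rule on the ordinal attribute stands in for the feed-forward network, and the entropy threshold and data are illustrative:

```python
# Sketch of the HDT scheme: a tree grown on nominal attributes alone;
# leaves whose class diversity (entropy here) exceeds a threshold are
# sub-classified on the ordinal attributes. The 1-NN rule is a stand-in
# for Zhou and Chen's neural network; all values are illustrative.
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# Instances: (nominal colour, ordinal size) -> class.
train = [(('red', 1.0), 'A'), (('red', 1.2), 'A'),
         (('blue', 0.5), 'A'), (('blue', 3.0), 'B')]

# A one-split tree on the nominal attribute sorts instances into leaves.
leaves = {}
for (colour, size), y in train:
    leaves.setdefault(colour, []).append((size, y))

THRESHOLD = 0.5  # assumed diversity cut-off
leaf_rule = {}
for colour, examples in leaves.items():
    ys = [y for _, y in examples]
    if entropy(ys) > THRESHOLD:
        # Diverse leaf: sub-classify on the ordinal attribute
        # (1-NN on size, standing in for the neural network).
        leaf_rule[colour] = lambda size, ex=examples: min(
            ex, key=lambda e: abs(e[0] - size))[1]
    else:
        # Pure enough leaf: keep a plain class label.
        leaf_rule[colour] = lambda size, label=ys[0]: label

def predict(colour, size):
    return leaf_rule[colour](size)

print(predict('red', 5.0))   # pure leaf -> 'A'
print(predict('blue', 2.5))  # diverse leaf, nearest size 3.0 -> 'B'
```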
In this chapter, we focus on the second decision tree ISD technique,
which considers the classifiers as part of the decision tree's growth. NBTree
is a method which produces a decision tree naive-Bayes hybrid classifier
[Kohavi (1996)]. In order to decide when to stop the recursive partition
of the instance-space (i.e. stop growing the tree), NBTree compares
two alternatives: partitioning the instance-space further on (i.e. continue
splitting the tree) versus stopping the partition and producing a single naive
Bayes classifier. The two alternatives are compared in terms of their error