14.6 Creating a Fuzzy Decision Tree
There are several algorithms for the induction of decision trees. In this section,
we focus on the algorithm proposed by Yuan and Shaw (1995). This
algorithm can handle classification problems in which both the attributes
and the classes are represented by linguistic fuzzy terms. It also handles
other situations in a uniform way: numerical values can be fuzzified into
fuzzy terms, and crisp categories can be treated as a special case of fuzzy
terms with zero fuzziness. The algorithm uses classification ambiguity as
its fuzzy entropy. The classification ambiguity, which directly measures the
quality of the classification rules at a decision node, can be calculated under
fuzzy partitioning and multiple fuzzy classes.
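The text refers to classification ambiguity without reproducing its formula at this point. As a rough illustration, the Python sketch below assumes the nonspecificity-style measure usually attributed to Yuan and Shaw (1995): normalize the class membership vector into a possibility distribution (largest value equal to 1), sort it in descending order, and weight successive differences by ln i. Treat this as a sketch under that assumption, not as the paper's verbatim definition.

import numpy as np

def classification_ambiguity(memberships):
    """Ambiguity (nonspecificity) of a fuzzy class distribution.

    Assumed form: g(pi) = sum_i (pi_i - pi_{i+1}) * ln(i), where pi is
    the membership vector normalized to a possibility distribution and
    sorted in descending order, with pi_{n+1} = 0 by convention.
    """
    pi = np.asarray(memberships, dtype=float)
    pi = pi / pi.max()             # normalize: largest membership becomes 1
    pi = np.sort(pi)[::-1]         # descending order
    pi = np.append(pi, 0.0)        # pi_{n+1} = 0
    ranks = np.arange(1, len(pi))  # i = 1..n (ln 1 = 0, so the top term vanishes)
    return float(np.sum((pi[:-1] - pi[1:]) * np.log(ranks)))

# A crisp assignment has zero ambiguity; a uniform one is maximal.
print(classification_ambiguity([1.0, 0.0, 0.0]))  # 0.0
print(classification_ambiguity([0.5, 0.5, 0.5]))  # ln(3) ~ 1.0986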
The fuzzy decision tree induction consists of the following steps:
1. Fuzzifying numeric attributes in the training set.
2. Inducing a fuzzy decision tree.
3. Simplifying the decision tree.
4. Applying fuzzy rules for classification.
14.6.1 Fuzzifying Numeric Attributes
When an attribute is numerical, it must be fuzzified into linguistic
terms before it can be used in the algorithm. The fuzzification can
be performed manually by experts or derived automatically using
some sort of clustering algorithm. Clustering groups the data instances
into subsets so that similar instances are grouped together while
dissimilar instances belong to different groups. The instances are thereby
organized into an efficient representation that characterizes the population
being sampled.
Yuan and Shaw (1995) suggest a simple algorithm for generating a set
of membership functions from numerical data. Assume attribute $a_i$ takes
a numerical value $x$ from the domain $X$. We can cluster $X$ into $k$ linguistic
terms $v_{i,j}$, $j = 1, \ldots, k$, where $k$ is predefined manually. For the first
linguistic term $v_{i,1}$, the following membership function is used:

$$
\mu_{v_{i,1}}(x) =
\begin{cases}
1, & x \le m_1 \\
\dfrac{m_2 - x}{m_2 - m_1}, & m_1 < x < m_2 \\
0, & x \ge m_2
\end{cases}
\tag{14.5}
$$
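Since only the membership function of the first term is given explicitly above, the Python sketch below is a hedged reconstruction: it follows Eq. (14.5) for the first term, mirrors it for the last term, and assumes triangular memberships peaking at the cluster centers for interior terms. The centers m_1, ..., m_k are assumed to come from a prior clustering step (e.g., k-means), and the function names are illustrative.

import numpy as np

def membership_functions(centers):
    """Build k membership functions from sorted cluster centers m_1..m_k.

    The first term implements Eq. (14.5); the last term is its mirror
    image, and interior terms are assumed to be triangular with peaks
    at their centers (the source gives only the first term explicitly).
    Requires k >= 2.
    """
    m = np.sort(np.asarray(centers, dtype=float))
    k = len(m)

    def make(j):
        def mu(x):
            x = np.asarray(x, dtype=float)
            if j == 0:              # first term, Eq. (14.5)
                return np.clip((m[1] - x) / (m[1] - m[0]), 0.0, 1.0)
            if j == k - 1:          # last term, mirror of Eq. (14.5)
                return np.clip((x - m[-2]) / (m[-1] - m[-2]), 0.0, 1.0)
            left = (x - m[j - 1]) / (m[j] - m[j - 1])    # rising edge
            right = (m[j + 1] - x) / (m[j + 1] - m[j])   # falling edge
            return np.clip(np.minimum(left, right), 0.0, 1.0)
        return mu

    return [make(j) for j in range(k)]

# Example: three linguistic terms from hypothetical cluster centers.
low, medium, high = membership_functions([2.0, 5.0, 8.0])
print(low(1.0), low(3.5), medium(5.0), high(9.0))  # 1.0 0.5 1.0 1.0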