(see [14]) which minimizes the number of edges in the resulting tree and
thereby lowers the time complexity of the algorithm.
We have two types of nodes: leaf nodes and non-leaf nodes. At a non-leaf
node, the set of rules is partitioned along the branches and each child node gets
its corresponding subset of rules. Every path to a decision attribute node, one
level above a leaf node, represents a subset of the extracted classification rules
whose stable attributes have the same values. Each leaf node represents a set of
rules which do not contradict each other on stable attributes and which define
the same decision value $d_i$. The path from the root to that leaf gives the
description of the objects supported by these rules.
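To make the construction concrete, here is a minimal Python sketch of the two-phase splitting described above. It is our illustration, not the algorithm of [14]: the Rule and Node classes are assumed encodings, and a rule that does not mention the split attribute is placed in every branch, on the grounds that it contradicts none of them (matching the "do not contradict on stable attributes" reading above).

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Rule:
    support: set          # objects of S supporting the rule
    conditions: dict      # attribute -> value, e.g. {"a": 2, "c": 1}
    decision: str         # value of the decision attribute d

@dataclass
class Node:
    rules: list
    children: dict = field(default_factory=dict)  # branch value -> child Node

def build_tree(rules, stable_attrs):
    # Grow an E-action tree: split by stable attributes (fewest values
    # first), then split the resulting nodes by the decision attribute.
    node = Node(rules)
    if not stable_attrs:
        # one leaf per decision value: rules in a leaf agree on stable
        # attributes (the path from the root) and on the decision value
        by_decision = defaultdict(list)
        for r in rules:
            by_decision[r.decision].append(r)
        node.children = {d: Node(rs) for d, rs in by_decision.items()}
        return node
    # pick the stable attribute with the smallest number of values
    attr = min(stable_attrs,
               key=lambda a: len({r.conditions[a]
                                  for r in rules if a in r.conditions}))
    rest = [a for a in stable_attrs if a != attr]
    values = {r.conditions[attr] for r in rules if attr in r.conditions}
    if not values:                 # no rule mentions attr: skip it
        return build_tree(rules, rest)
    branches = defaultdict(list)
    for r in rules:
        if attr in r.conditions:
            branches[r.conditions[attr]].append(r)
        else:
            # assumption: a rule silent on attr contradicts no branch,
            # so it is kept in every sub-table
            for v in values:
                branches[v].append(r)
    node.children = {v: build_tree(rs, rest) for v, rs in branches.items()}
    return node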
Instead of splitting the set of rules R first by stable attributes and then by
the decision attribute, we can also start the partitioning algorithm from the
decision attribute. For instance, if the decision attribute has 3 values, we get
3 initial sub-trees. In the next step of the algorithm, we split these sub-trees
by stable attributes, following the same strategy as the one presented for
E-action trees. This new algorithm is called the action-forest algorithm.
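Under the same assumptions, the action-forest variant needs only a few lines on top of build_tree: it groups the rules by decision value before any stable-attribute split.

def build_forest(rules, stable_attrs):
    # one initial sub-tree per decision value; a decision attribute with
    # 3 values therefore yields 3 initial sub-trees
    by_decision = defaultdict(list)
    for r in rules:
        by_decision[r.decision].append(r)
    # each group is then split by stable attributes exactly as before;
    # the final decision split inside build_tree is trivial here, since
    # every group already shares a single decision value
    return {d: build_tree(rs, stable_attrs) for d, rs in by_decision.items()}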
Now, let us take Table 9.1 as an example of a decision system S. Attributes
a, c are stable and b, d are flexible. Assume now that our plan is to re-classify
some objects from the class $d^{-1}(\{d_i\})$ into the class $d^{-1}(\{d_j\})$. In our
example, we also assume that $d_i = (d, L)$ and $d_j = (d, H)$.
First, we represent the set R of certain rules extracted from S as a table (see
Table 9.2). The first column of this table shows objects in S supporting the rules
from R (each row represents a rule). For instance, the second row represents the
rule $[[(a, 2) \wedge (c, 1)] \rightarrow (d, L)]$. The construction of an action
tree starts with the set R as a table (see Table 9.2) representing the root of the
tree ($T_1$ in Fig. 9.1). The root node selection is based on a stable attribute
with the smallest number of values among all stable attributes. The same strategy
is used for child node selection. After labelling the nodes of the tree by all
stable attributes, the tree is split based on the value of the decision attribute.
Referring back to the example in Table 9.1, we use stable attribute a to split
that table into two sub-tables defined by the values $\{1, 2\}$ of attribute a. The
domain of attribute a is $\{1, 2\}$ and the domain of attribute c is $\{0, 1, 2\}$.
Clearly, $card[V_a]$ is less than $card[V_c]$, so we partition the table into two:
one table with rules containing $a = 1$ and another with rules containing $a = 2$
(a worked version of this step is sketched after Table 9.2). Corresponding edges
are labelled by the values of attribute a. All rules in the sub-table $T_2$ have the
same decision value, so that branch does not need to be split any further.
Table 9.2. Set of rules R with supporting objects
Objects            a  b  c  d
{x3,x4,x11,x12}       1     H
{x1,x2,x7,x8}      2     1  L
{x7,x8,x9}         2  0     L
{x3,x4}            1     0  H
{x5,x6}               3  2  H
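Encoding the rules of Table 9.2 with the hypothetical Rule class above reproduces the splitting step just described. The column placement of the values the text does not spell out (rows 1, 3, 4 and 5) is inferred from the supporting objects, so treat the encoding as illustrative.

R = [
    Rule({"x3", "x4", "x11", "x12"}, {"b": 1},         "H"),
    Rule({"x1", "x2", "x7", "x8"},   {"a": 2, "c": 1}, "L"),
    Rule({"x7", "x8", "x9"},         {"a": 2, "b": 0}, "L"),
    Rule({"x3", "x4"},               {"a": 1, "c": 0}, "H"),
    Rule({"x5", "x6"},               {"b": 3, "c": 2}, "H"),
]

# card[Va] = 2 < card[Vc] = 3, so build_tree splits on a first
root = build_tree(R, stable_attrs=["a", "c"])
t2 = root.children[1]                      # sub-table T2, branch a = 1
print({r.decision for r in t2.rules})      # {'H'}: one decision value only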
 