is the set of decision values, into subsets of rules having the same values of stable
attributes in their classification parts and defining the same value of the deci-
sion attribute. Classification rules can be extracted from S using, for instance, the discovery system LERS [2].
The action-tree algorithm for extracting E-action rules from a decision system S is as follows (a code sketch of the procedure is given after the list):
i. Build Action-Tree
a. Partition the set of classification rules R in such a way that two rules belong to the same class if the values of their stable attributes are the same
1. Find the cardinality card(V_{v_i}) of the domain V_{v_i} of each stable attribute v_i in S.
2. Take the attribute v_i for which card(V_{v_i}) is smallest as the splitting attribute and divide R into subsets, each of which contains rules having the same value of the stable attribute v_i.
3. For each subset obtained in step 2, determine whether it contains rules with different decision values and different values of flexible attributes. If it does, go back to step 2; if it does not, the subset needs no further splitting and we mark it.
b. Partition each resulting subset into new subsets each of which contains
only rules having the same decision value.
c. Each leaf of the resulting tree represents a set of rules that do not contradict one another on stable attributes and that uniquely define the decision value d_i. The path from the root to that leaf gives the description of the objects supported by these rules.
ii. Generate E-action rules
a. Form E-action rules by comparing all unmarked leaf nodes of the same
parent.
b. Calculate the support and confidence of each newly formed E-action rule. If the support and confidence meet the thresholds set by the user, print the rule.
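To make the two phases concrete, the sketch below mirrors them in Python. The rule representation (a Rule record with stable, flexible, decision and support fields) and the names build_action_tree and pair_leaves are assumptions made for this illustration, not notation from [2]; the support and confidence check of step ii.b is left out because its exact formulas depend on the definitions adopted for E-action rules.

```python
# A minimal sketch of the action-tree construction, assuming a simple
# rule representation; names and fields below are illustrative only.
class Rule:
    def __init__(self, stable, flexible, decision, support):
        self.stable = stable        # dict: stable attribute -> value
        self.flexible = flexible    # dict: flexible attribute -> value
        self.decision = decision    # value of the decision attribute
        self.support = support      # number of objects supporting the rule

def build_action_tree(rules, stable_attrs, domains):
    """Recursively partition `rules` on stable attributes, splitting on
    the attribute with the smallest domain first (steps i.a.1-i.a.3),
    and finally on the decision value (step i.b)."""
    decisions = {r.decision for r in rules}
    flexible_parts = {tuple(sorted(r.flexible.items())) for r in rules}
    # Step i.a.3: stop splitting ("place a mark") when the rules no longer
    # differ on decision values and flexible attributes, or when no stable
    # attribute is left.
    if not stable_attrs or (len(decisions) == 1 and len(flexible_parts) == 1):
        return split_by_decision(rules)
    # Step i.a.2: choose the stable attribute with the smallest domain.
    v = min(stable_attrs, key=lambda a: len(domains[a]))
    remaining = [a for a in stable_attrs if a != v]
    children = {}
    for value in domains[v]:
        subset = [r for r in rules if r.stable.get(v) == value]
        if subset:
            children[(v, value)] = build_action_tree(subset, remaining, domains)
    return children

def split_by_decision(rules):
    # Step i.b: group a leaf's rules by the decision value they define.
    leaves = {}
    for r in rules:
        leaves.setdefault(r.decision, []).append(r)
    return leaves

def pair_leaves(decision_leaves, target_decision):
    """Step ii.a: compare leaf nodes of the same parent, pairing every rule
    that defines a different decision value with every rule that defines
    `target_decision`; each pair is a candidate E-action rule whose flexible
    part changes from the first rule's values to the second rule's values."""
    candidates = []
    for d, rules in decision_leaves.items():
        if d == target_decision:
            continue
        for r1 in rules:
            for r2 in decision_leaves.get(target_decision, []):
                candidates.append((r1, r2))
    return candidates
```

Each inner node of the returned structure is keyed by a (stable attribute, value) pair, so the path from the root to a leaf plays the role of the stable-attribute description mentioned in step i.c.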
The algorithm starts at the root node of the tree, called the E-action tree, which represents all classification rules extracted from S. A stable attribute is selected to partition these rules. For each value of that attribute an outgoing edge from the root node is created, and the corresponding subset of rules having the attribute value assigned to that edge is moved to the newly created child node. This process is repeated recursively for each child node. When we are done with the stable attributes, the last split is based on the decision attribute for each current leaf of the E-action tree. If at any point all classification rules representing a node have the same decision value, we stop constructing that part of the tree. We still have to explain which stable attribute is chosen to split the classification rules representing a node of the E-action tree. The algorithm selects a stable attribute with the smallest number of possible values among all the remaining stable attributes. This step is justified by the need to apply a heuristic strategy
 