To find the confidence of an (r1, r2)-e-action rule in S, we divide the number
of objects supporting the (r1, r2)-e-action rule in S by the number of objects
supporting its left-hand side, and then multiply the result by the confidence
of the second classification rule r2 in S.
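To make the computation concrete, here is a minimal Python sketch of this definition under the reading given above; the function name e_action_rule_confidence and its parameter names are illustrative only and do not come from the original text.

    def e_action_rule_confidence(sup_rule, sup_lhs, conf_r2):
        # sup_rule: number of objects in S supporting the (r1, r2)-e-action rule
        # sup_lhs:  number of objects in S supporting its left-hand side
        # conf_r2:  confidence of the second classification rule r2 in S
        if sup_lhs == 0:
            return 0.0  # no object matches the left-hand side
        return (sup_rule / sup_lhs) * conf_r2

For instance, with hypothetical counts of 20 supporting objects, 40 objects matching the left-hand side, and conf(r2) = 0.9, the confidence would be (20/40) * 0.9 = 0.45.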
4 Discovering E-Action Rules
In this section we present a new algorithm, the Action-Tree algorithm, for discov-
ering e-action rules. Basically, we partition the set of rules discovered from an
information system S = (U, A_St ∪ A_Fl ∪ {d}), where A_St is the set of stable
attributes, A_Fl is the set of flexible attributes, and V_d = {d_1, d_2, ..., d_k} is
the set of decision values, into subsets of rules supporting the same values of
stable attributes and the same decision value.
The Action-Tree algorithm for extracting e-action rules from a decision system S
is as follows:
i. Build Action-Tree
a. Divide the rule table, R, taking into consideration all stable attributes:
1. Find the domain Dom(v_i^St) of each stable attribute v_i^St from the
initial table.
2. Choose the stable attribute v_i^St whose domain Dom(v_i^St) has the
smallest number of values, and partition the current table into sub-tables,
each of which contains only the rules supporting the corresponding value
of that stable attribute.
3. Determine whether a new table contains at least two different deci-
sion values and at least two different values for each flexible
attribute. If it does, go to Step 2; otherwise there is no need to
split the table further and we place a mark on it.
b. Divide each lowest-level sub-table into new sub-tables, each of which
contains rules having the same decision value.
c. Represent each leaf as a set of rules which do not contradict on stable
attributes and also define decision value d_i. The path from the root to
that leaf gives the description of objects supported by these rules.
ii. Generate e-action rules
a. Form e-action rules by comparing all unmarked leaf nodes of the same
parent.
b. Calculate the support and the confidence of each newly formed rule. If its
support and confidence meet the requirements, print it.
The algorithm starts with all extracted classification rules at the root node
of the tree. A stable attribute is selected to partition these rules. For each
value of that attribute a branch is created, and the corresponding subset of
rules that have the attribute value specified by the branch is moved to the
newly created child node. The process is then repeated recursively for each
child node.
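As a concrete illustration of the steps above, the following Python sketch builds the action tree and collects its leaves. It assumes each extracted classification rule is represented as a dictionary with a "stable" attribute-value map, a "flexible" attribute-value map, and a "decision" value; this representation and the function names are ours, and the flexible-attribute part of the test in Step 3 is omitted for brevity.

    def build_action_tree(rules, stable_attrs):
        # Step i.a: recursively partition the rule table on stable attributes.
        decisions = {r["decision"] for r in rules}
        if len(decisions) < 2:
            # Fewer than two decision values: no need to split further, mark the node.
            return {"marked": True, "rules": rules, "children": None}
        if not stable_attrs:
            # Stable attributes exhausted: an unmarked lowest-level sub-table.
            return {"marked": False, "rules": rules, "children": None}

        # Split on the stable attribute whose domain (among these rules) is smallest.
        def domain(a):
            return {r["stable"][a] for r in rules if a in r["stable"]}
        attr = min(stable_attrs, key=lambda a: len(domain(a)))
        values = domain(attr)
        if not values:
            return {"marked": False, "rules": rules, "children": None}

        remaining = [a for a in stable_attrs if a != attr]
        children = {}
        for value in values:
            # A rule that does not mention attr contradicts no branch,
            # so it is carried into every sub-table.
            subset = [r for r in rules if r["stable"].get(attr, value) == value]
            children[value] = build_action_tree(subset, remaining)
        return {"marked": False, "rules": rules, "children": children}

    def unmarked_leaves_by_decision(node):
        # Steps i.b-i.c: split each unmarked lowest-level sub-table by decision value.
        if node["children"] is None:
            if node["marked"]:
                return []
            groups = {}
            for r in node["rules"]:
                groups.setdefault(r["decision"], []).append(r)
            # Step ii.a pairs leaves with different decision values that
            # share the same parent, i.e. the groups returned here.
            return [groups]
        leaves = []
        for child in node["children"].values():
            leaves.extend(unmarked_leaves_by_decision(child))
        return leaves

Given a list of rules and the stable attributes, build_action_tree(rules, stable_attrs) produces the tree, and unmarked_leaves_by_decision returns, for each unmarked lowest-level sub-table, its rules grouped by decision value, from which step ii forms candidate e-action rules and filters them by support and confidence.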