Table 2. Partitioned data set from Table 1

Outlook   | Humidity | Windy | Decision
----------+----------+-------+-----------
sunny     | MEDIUM   | TRUE  | Play
sunny     | LOW      | FALSE | Play
sunny     | MEDIUM   | FALSE | Play
overcast  | LOW      | FALSE | Play
sunny     | MEDIUM   | TRUE  | Play
sunny     | LOW      | TRUE  | Play
overcast  | LOW      | TRUE  | Play
sunny     | MEDIUM   | FALSE | Play
sunny     | LOW      | TRUE  | Play
overcast  | HIGH     | TRUE  | Don't play
rainy     | HIGH     | FALSE | Don't play
overcast  | MEDIUM   | FALSE | Don't play
overcast  | HIGH     | FALSE | Don't play
rainy     | MEDIUM   | FALSE | Don't play
rainy     | HIGH     | TRUE  | Θ_C
overcast  | HIGH     | TRUE  | Θ_C
sunny     | HIGH     | TRUE  | Θ_C
The data sets, after being partitioned based on the class label, are given
in Table 2. To keep this example simple, we set the support value at
0.25, which is greater than the values used in our simulations in
Sect. 3. Table 3 shows the rules generated. For this simple example, at this
stage, the numbers of rules generated were 10, 10 and 15, corresponding
to the classes 'Play', 'Don't play' and Θ_C, respectively.
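The counting behind these rule totals can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: the function names and the enumeration over antecedent lengths are assumptions. It treats a rule as an attribute-value combination whose support, i.e. the fraction of the class partition's rows it covers, meets the 0.25 threshold.

```python
from itertools import combinations
from collections import Counter

def generate_rules(rows, attributes, min_support=0.25):
    """Return (antecedent, support) pairs for one class partition.

    rows: list of dicts mapping attribute name -> value.
    Antecedents of length 1..len(attributes) correspond to the
    different levels of abstraction mentioned in the text.
    """
    n = len(rows)
    rules = []
    for k in range(1, len(attributes) + 1):
        for attrs in combinations(attributes, k):
            # Count how many rows share each value combination
            counts = Counter(tuple(r[a] for a in attrs) for r in rows)
            for values, c in counts.items():
                if c / n >= min_support:
                    rules.append((dict(zip(attrs, values)), c / n))
    return rules

# The nine 'Play' rows from Table 2
play_rows = [
    {"Outlook": "sunny",    "Humidity": "MEDIUM", "Windy": "TRUE"},
    {"Outlook": "sunny",    "Humidity": "LOW",    "Windy": "FALSE"},
    {"Outlook": "sunny",    "Humidity": "MEDIUM", "Windy": "FALSE"},
    {"Outlook": "overcast", "Humidity": "LOW",    "Windy": "FALSE"},
    {"Outlook": "sunny",    "Humidity": "MEDIUM", "Windy": "TRUE"},
    {"Outlook": "sunny",    "Humidity": "LOW",    "Windy": "TRUE"},
    {"Outlook": "overcast", "Humidity": "LOW",    "Windy": "TRUE"},
    {"Outlook": "sunny",    "Humidity": "MEDIUM", "Windy": "FALSE"},
    {"Outlook": "sunny",    "Humidity": "LOW",    "Windy": "TRUE"},
]
rules = generate_rules(play_rows, ["Outlook", "Humidity", "Windy"])
```

Under these assumptions, running the sketch on the 'Play' partition with support 0.25 yields exactly 10 rules, matching the count reported above; the same procedure on the other two partitions yields 10 and 15.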
The pruning process (see Sect. 2.5) produces a reduced rule set. This
final rule set is shown in Table 4. The support value is not indicated in
this table since it does not play a critical role in the classification stage.
As can be seen from Table 4, rules with lower levels of abstraction
appear to have a better chance of being selected for the pruned rule
set.
Next, this pruned rule set is used in the rule refinement stage (see Sect. 2.7).
The rule set generated at the conclusion of the rule refinement stage is
given in Table 5. Note that the last rule in Table 5 was added to the final
rule set because its corresponding training instance was incorrectly classified
at the rule refinement stage. When conflicting rules are present in
the final rule set, masses are assigned to those rules based on their
confidence values, and Dempster's rule of combination (shown in (7))
takes this into account when making the final decision in the classification
stage.
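The way conflicting rules are reconciled can be sketched with the standard form of Dempster's rule of combination, which (7) denotes. The mass-assignment convention used here, confidence on the rule's class and the remainder on the whole frame Θ_C, is an assumption based on the surrounding text, and the confidence values 0.8 and 0.6 are purely illustrative.

```python
# Frame of discernment for this example
THETA = frozenset({"Play", "Don't play"})

def combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions,
    normalized by 1 - K where K is the mass assigned to conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    k = 1.0 - conflict
    return {a: v / k for a, v in combined.items()}

def rule_mass(cls, confidence):
    """A rule with the given confidence puts that mass on its class
    and the remainder on the full frame (ignorance)."""
    return {frozenset({cls}): confidence, THETA: 1.0 - confidence}

# Two conflicting rules fire on the same instance
m = combine(rule_mass("Play", 0.8), rule_mass("Don't play", 0.6))
```

With these masses the combined belief favours 'Play', illustrating how the confidence-weighted combination resolves the conflict at classification time.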