Table 7. Non-ambiguous UCI databases - average number of rules generated per class

Database               | KNN & KNN-BF | ARM | ARM-KNN-BF
-----------------------|--------------|-----|-----------
Breast cancer          | 242          | 49  | 76
Car                    | 302          | N/A | 71
Diabetes               | 258          | 57  | 169
Iris                   | 32           | 5   | 18
Monks                  | 200          | N/A | 48
Post-operation patient | 23           | N/A | 18
Scale                  | 438          | N/A | 61
Solar flares           | 743          | N/A | 42
Tic-Tac-Toe            | 334          | 8   | 70
Voting                 | 297          | N/A | 57
Wine                   | 41           | 7   | 30
Zoo                    | 63           | 7   | 5
Table 8. Non-ambiguous UCI databases - classification accuracy

Database               | KNN  | c4.5rules | ARM  | KNN-BF | ARM-KNN-BF
-----------------------|------|-----------|------|--------|-----------
Breast cancer          | 0.97 | 0.95      | 0.93 | 0.96   | 0.97
Car                    | 0.92 | 0.93      | N/A  | 0.93   | 0.93
Diabetes               | 0.70 | 0.72      | 0.71 | 0.72   | 0.76
Iris                   | 0.94 | 0.94      | 0.96 | 0.93   | 0.95
Monks                  | 0.92 | 0.98      | N/A  | 0.97   | 0.95
Post-operation patient | 0.69 | 0.76      | N/A  | 0.74   | 0.76
Scale                  | 0.83 | 0.85      | N/A  | 0.84   | 0.84
Solar flares           | 0.82 | 0.83      | N/A  | 0.82   | 0.81
Tic-Tac-Toe            | 0.92 | 0.98      | 0.93 | 1.00   | 0.99
Voting                 | 0.91 | 0.92      | N/A  | 0.90   | 0.93
Wine                   | 0.94 | 0.91      | 0.96 | 0.92   | 0.96
Zoo                    | 0.90 | 0.92      | 0.95 | 0.96   | 0.98
Although the ARM classifier generates significantly fewer rules than the others, it fails to handle class label ambiguities. Along with the proposed ARM-KNN-BF classifier, the KNN and KNN-BF classifiers are the only ones applicable in such a situation, and among these the ARM-KNN-BF classifier uses significantly fewer rules. Tables 8 and 9 give the classification accuracy and the standard deviation for these UCI databases. For the ARM classifier, the best average accuracy was used (i.e., CBA-CAR plus infrequent rules, as reported in [18]). Table 8 shows that the ARM-KNN-BF classifier performs comparably to the other classifiers, while operating on a much smaller rule set than the KNN and KNN-BF classifiers.
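As a quick sanity check on this comparison, the short sketch below (illustrative Python, not part of the original study) averages each classifier's accuracy from Table 8 over the databases where it is applicable, treating N/A entries as missing values:

```python
# Accuracy per classifier over the 12 UCI databases of Table 8, in row order:
# Breast cancer, Car, Diabetes, Iris, Monks, Post-operation patient,
# Scale, Solar flares, Tic-Tac-Toe, Voting, Wine, Zoo.
# None marks an N/A entry (ARM is not applicable to that database).
accuracy = {
    "KNN":        [0.97, 0.92, 0.70, 0.94, 0.92, 0.69, 0.83, 0.82, 0.92, 0.91, 0.94, 0.90],
    "c4.5rules":  [0.95, 0.93, 0.72, 0.94, 0.98, 0.76, 0.85, 0.83, 0.98, 0.92, 0.91, 0.92],
    "ARM":        [0.93, None, 0.71, 0.96, None, None, None, None, 0.93, None, 0.96, 0.95],
    "KNN-BF":     [0.96, 0.93, 0.72, 0.93, 0.97, 0.74, 0.84, 0.82, 1.00, 0.90, 0.92, 0.96],
    "ARM-KNN-BF": [0.97, 0.93, 0.76, 0.95, 0.95, 0.76, 0.84, 0.81, 0.99, 0.93, 0.96, 0.98],
}

def mean_accuracy(values):
    """Mean over the available entries, skipping N/A (None) values."""
    present = [v for v in values if v is not None]
    return sum(present) / len(present)

for name, vals in accuracy.items():
    print(f"{name:12s} {mean_accuracy(vals):.3f}")
```

Note that the ARM average is taken over only the six databases it can handle, so it is not directly comparable to the classifiers evaluated on all twelve.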
 