Table 6.1. Objective rule evaluation indices for classification rules used in this research. P: Probability of the antecedent and/or consequent of a rule. S: Statistical variable based on P. I: Information of the antecedent and/or consequent of a rule. N: Number of instances included in the antecedent and/or consequent of a rule. D: Distance of a rule from the others based on rule attributes.

Theory  Index Name (Abbreviation) [Reference Number of Literature]
P       Coverage (Coverage), Prevalence (Prevalence),
        Precision (Precision), Recall (Recall),
        Support (Support), Specificity (Specificity),
        Accuracy (Accuracy), Lift (Lift),
        Leverage (Leverage), Added Value (Added Value) [2],
        Klösgen's Interestingness (KI) [14], Relative Risk (RR) [15],
        Brin's Interest (BI) [16], Brin's Conviction (BC) [16],
        Certainty Factor (CF) [2], Jaccard Coefficient (Jaccard) [2],
        F-Measure (F-M) [17], Odds Ratio (OR) [2],
        Yule's Q (YuleQ) [2], Yule's Y (YuleY) [2],
        Kappa (Kappa) [2], Collective Strength (CST) [2],
        Gray and Orlowska's Interestingness weighting Dependency (GOI) [18],
        Gini Gain (Gini) [2], Credibility (Credibility) [19]
S       χ² Measure for One Quadrant (χ²-M1) [20],
        χ² Measure for Four Quadrant (χ²-M4) [20]
I       J-Measure (J-M) [21], K-Measure (K-M) [4],
        Mutual Information (MI) [2],
        Yao and Liu's Interestingness 1 based on one-way support (YLI1) [3],
        Yao and Liu's Interestingness 2 based on two-way support (YLI2) [3],
        Yao and Zhong's Interestingness (YZI) [3]
N       Cosine Similarity (CSI) [2], Laplace Correction (LC) [2],
        φ Coefficient (φ) [2], Piatetsky-Shapiro's Interestingness (PSI) [22]
D       Gago and Bento's Interestingness (GBI) [23],
        Peculiarity (Peculiarity) [24]
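Most of the probability-based (P) indices in Table 6.1 are defined over the 2×2 contingency table of a rule's antecedent A and consequent C. As an illustrative sketch only, the following Java snippet computes a few of the most common ones (coverage, support, precision, recall, lift, and leverage) from the four cell counts, using widely cited textbook definitions; the exact formulations applied in this research follow the referenced literature.

/**
 * Illustrative sketch (not the exact formulations used in this chapter):
 * a handful of the probability-based indices from Table 6.1, computed
 * from the 2x2 contingency table of a rule "A => C".
 *
 *   nAC   : instances matching antecedent A and consequent C
 *   nAnC  : instances matching A but not C
 *   nnAC  : instances not matching A but matching C
 *   nnAnC : instances matching neither A nor C
 */
public class RuleIndices {

    public static void main(String[] args) {
        double nAC = 30, nAnC = 10, nnAC = 20, nnAnC = 40;  // toy counts
        double n = nAC + nAnC + nnAC + nnAnC;

        double pA  = (nAC + nAnC) / n;        // P(A)
        double pC  = (nAC + nnAC) / n;        // P(C)
        double pAC = nAC / n;                 // P(A and C)

        double coverage  = pA;                // Coverage  = P(A)
        double support   = pAC;               // Support   = P(A and C)
        double precision = pAC / pA;          // Precision = P(C | A)
        double recall    = pAC / pC;          // Recall    = P(A | C)
        double lift      = precision / pC;    // Lift      = P(C | A) / P(C)
        double leverage  = pAC - pA * pC;     // Leverage  = P(A and C) - P(A)P(C)

        System.out.printf("coverage=%.3f support=%.3f precision=%.3f%n",
                coverage, support, precision);
        System.out.printf("recall=%.3f lift=%.3f leverage=%.3f%n",
                recall, lift, leverage);
    }
}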
number of instances as the rule set. Each instance has 40 attributes, including the class attribute.
We applied five learning algorithms to these datasets to compare their performance as rule evaluation model learning methods. We used the following learning algorithms from Weka [25]: the C4.5 decision tree learner [26] implemented as J4.8, a neural network learner with back propagation (BPNN) [27], support vector machines (SVM)¹ [28], classification via linear regression (CLR)² [29], and OneR [30]. In addition, we used the following selective meta-learning algorithms: Bagging [5], Boosting [6], and Stacking³ [7].
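As a rough illustration of how such a comparison could be set up with Weka's Java API, the sketch below runs the five base-level learners on one training dataset and reports ten-fold cross-validated accuracy. The file name, fold count, and random seed are assumptions made for the example and are not taken from this chapter.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.classifiers.functions.SMO;
import weka.classifiers.meta.ClassificationViaRegression;
import weka.classifiers.rules.OneR;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CompareLearners {
    public static void main(String[] args) throws Exception {
        // Hypothetical training data: rule evaluation index values plus a class label.
        Instances data = DataSource.read("rule-evaluation-training.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] learners = {
            new J48(),                         // C4.5 decision tree (J4.8)
            new MultilayerPerceptron(),        // back-propagation neural network
            new SMO(),                         // support vector machine
            new ClassificationViaRegression(), // classification via regression
            new OneR()                         // OneR
        };

        for (Classifier learner : learners) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(learner, data, 10, new Random(1));
            System.out.printf("%-30s accuracy = %.2f%%%n",
                    learner.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}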
6.4.1 Constructing Rule Evaluation Models for the Meningitis Data Mining Result
In this case study, we considered 244 rules, which had been mined from six
datasets about six types of diagnostic problems as shown in Table 6.2. In these
¹ A polynomial kernel function was used.
² We eliminated collinear attributes and selected the model with a greedy search based on the Akaike information criterion.
³ This stacking took the other seven learning algorithms as base-level learners and J4.8 as the meta-level learner.
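For readers working with Weka directly, these footnote configurations might be expressed roughly as follows; the specific setter calls and the choice of J4.8 as the base learner for Bagging and Boosting are assumptions for illustration, not settings reported by the authors.

import weka.classifiers.Classifier;
import weka.classifiers.functions.LinearRegression;
import weka.classifiers.functions.SMO;
import weka.classifiers.functions.supportVector.PolyKernel;
import weka.classifiers.meta.AdaBoostM1;
import weka.classifiers.meta.Bagging;
import weka.classifiers.meta.ClassificationViaRegression;
import weka.classifiers.meta.Stacking;
import weka.classifiers.trees.J48;
import weka.core.SelectedTag;

public class FootnoteConfigs {
    public static void main(String[] args) {
        // Footnote 1: SVM with a polynomial kernel.
        SMO svm = new SMO();
        svm.setKernel(new PolyKernel());

        // Footnote 2: linear regression that drops collinear attributes and
        // selects attributes greedily using the Akaike information criterion.
        LinearRegression lr = new LinearRegression();
        lr.setEliminateColinearAttributes(true);
        lr.setAttributeSelectionMethod(
                new SelectedTag(LinearRegression.SELECTION_GREEDY,
                                LinearRegression.TAGS_SELECTION));
        ClassificationViaRegression clr = new ClassificationViaRegression();
        clr.setClassifier(lr);

        // Bagging and Boosting wrapped around J4.8 (the base learner here is an assumption).
        Bagging bagging = new Bagging();
        bagging.setClassifier(new J48());
        AdaBoostM1 boosting = new AdaBoostM1();
        boosting.setClassifier(new J48());

        // Footnote 3: Stacking with the other learners at the base level
        // and J4.8 as the meta-level learner (only a subset shown here).
        Stacking stacking = new Stacking();
        stacking.setClassifiers(new Classifier[] { new J48(), svm, clr });
        stacking.setMetaClassifier(new J48());
    }
}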