Table 4.3 Classifiers used by categories

Method                              Acronym   References
Rule Induction Learning
  C4.5                              C4.5      [73]
  Ripper                            Ripper    [16]
  CN2                               CN2       [14]
  AQ-15                             AQ        [59]
  PART                              PART      [33]
  Slipper                           Slipper   [15]
  Scalable Rule Induction           SRI       [68]
  Rule Induction Two In One         Ritio     [100]
  Rule Extraction System version 6  Rule-6    [67]
Black Box Methods
  Multi-Layer Perceptron            MLP       [61]
  C-SVM                             C-SVM     [25]
  ν-SVM                             ν-SVM     [25]
  Sequential Minimal Optimization   SMO       [70]
  Radial Basis Function Network     RBFN      [8]
  RBFN Decremental                  RBFND     [8]
  RBFN Incremental                  RBFNI     [69]
  Logistic                          LOG       [10]
  Naïve-Bayes                       NB        [21]
  Learning Vector Quantization      LVQ       [7]
Lazy Learning
  1-NN                              1-NN      [57]
  3-NN                              3-NN      [57]
  Locally Weighted Learning         LWL       [2]
  Lazy Learning of Bayesian Rules   LBR       [103]
Since showing all the detailed accuracy values for each fold, data set, imputation method and classifier would take too much space, we have used Wilcoxon's Signed Rank test to summarize them. For each classifier, we have compared every imputation method with the rest in pairs. Every time the classifier obtains a better accuracy value for one imputation method than for another and the statistical test yields a p-value < 0.1, we count it as a win for the former imputation method; otherwise, when the p-value > 0.1, it is counted as a tie.
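As an informal illustration of this counting scheme (not the authors' code), the following Python sketch compares every pair of imputation methods for one classifier with the Wilcoxon test; the dictionary layout, the `pairwise_wilcoxon` name and the use of mean accuracy to decide which method is "better" are assumptions made for the example.

```python
# Sketch of the pairwise win/tie counting described above.
# `accuracies` is assumed to map imputation method name -> list of accuracy
# values for one classifier, one value per data set (same order for all methods).
from itertools import combinations
from scipy.stats import wilcoxon

def pairwise_wilcoxon(accuracies, alpha=0.1):
    wins = {method: 0 for method in accuracies}
    ties = 0
    for a, b in combinations(accuracies, 2):
        acc_a, acc_b = accuracies[a], accuracies[b]
        _, p_value = wilcoxon(acc_a, acc_b)        # paired, non-parametric test
        if p_value < alpha:
            # Significant difference: the method with the higher mean accuracy wins.
            better = a if sum(acc_a) / len(acc_a) > sum(acc_b) / len(acc_b) else b
            wins[better] += 1
        else:
            ties += 1                              # p-value > alpha: count a tie
    return wins, ties
```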
In the case of rule induction learning, in Table 4.4 we show the average ranking of each imputation method for every classifier belonging to this group. We can observe that, for the rule induction learning classifiers, the imputation methods FKMI, SVMI and EC perform best. The differences in average rankings between these three methods are low, so we can consider them the most suitable imputation methods for this kind of classifier. They are well separated from the other imputation methods.
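As a hint of how such average rankings are commonly obtained (the array layout and helper name below are assumptions, not necessarily the authors' exact procedure), one can rank the imputation methods by accuracy on each data set and then average the per-data-set ranks:

```python
# Sketch: average ranking of imputation methods for one classifier.
# `accuracy_matrix` is assumed to have shape (n_datasets, n_methods).
import numpy as np
from scipy.stats import rankdata

def average_rankings(accuracy_matrix, method_names):
    acc = np.asarray(accuracy_matrix, dtype=float)
    # Rank within each data set; negate so the highest accuracy gets rank 1,
    # with tied accuracies sharing the average rank.
    ranks = np.vstack([rankdata(-row) for row in acc])
    # Average the per-data-set ranks: a lower average rank means a better method.
    return dict(zip(method_names, ranks.mean(axis=0)))
```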