used by CAMLET to allow it to construct a more appropriate learning algorithm
for a given dataset.
6.5 Conclusion
In this paper, we described an evaluation of nine learning algorithms for a
rule evaluation support method that uses rule evaluation models to predict the
evaluation of an if-then rule from objective indices, re-using evaluations
previously made by a human expert.
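The core idea above can be sketched as a small classification task: each rule is represented by a vector of objective indices, and the target label is the human expert's evaluation. The following is a minimal illustration only, not the paper's method; the index names, values, labels, and the 1-nearest-neighbor classifier are all assumptions standing in for the nine learning algorithms studied.

```python
# Hypothetical rule evaluation model: predict an expert's evaluation of an
# if-then rule from its objective indices. A 1-nearest-neighbor classifier
# is used here purely as a stand-in for the learning algorithms in the paper.
from math import dist

# Hypothetical training data: (objective indices, expert evaluation).
# The three indices are assumed to be (support, confidence, lift).
train = [
    ((0.40, 0.90, 2.1), "interesting"),
    ((0.05, 0.30, 0.9), "not-interesting"),
    ((0.35, 0.85, 1.8), "interesting"),
    ((0.10, 0.40, 1.0), "not-interesting"),
]

def predict(indices):
    """Label a new rule with the evaluation of its nearest training rule."""
    _, label = min(train, key=lambda ex: dist(ex[0], indices))
    return label

print(predict((0.38, 0.88, 2.0)))  # close to the "interesting" training rules
```

In this framing, re-using the expert's past labels as training data is what allows the model to predict evaluations for rules the expert has not yet seen.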
Based on a performance comparison of the nine learning algorithms on the
dataset obtained from the meningitis data mining result, the rule evaluation
models achieved higher accuracies than simply predicting the majority class. For
this dataset, the learning algorithm constructed by CAMLET attained higher
accuracy and higher reliability than the other eight learning algorithms,
including three selective meta-learning algorithms. For the datasets of rule
sets obtained from eight UCI datasets, although SVM, CLR, and Stacking failed
to reach the percentage of the majority class on some datasets, the other
learning algorithms met or exceeded the majority-class percentage of each
dataset using less than 50% of each training dataset. Thus, our constructive
meta-learning scheme demonstrated greater flexibility across different class
distributions based on various criteria.
Considering the difference between the actual evaluation labeling and the
artificial evaluation labeling, we found that the medical expert's evaluations
took into account particular relations between an antecedent and a class, or
between antecedents, within each rule. These results indicate that our approach
can detect differences in human criteria as performance differences among rule
evaluation models.
In the future, we will improve CAMLET's method repository so that it can
construct learning algorithms better suited to rule evaluation models. We will
also apply this rule evaluation support method to datasets from other domains.