6. T. Jo and N. Japkowicz, “Class imbalances versus small disjuncts,” ACM SIGKDD Explorations, vol. 6, no. 1, pp. 40-49, 2004.
7. G. Weiss, “Learning with rare cases and small disjuncts,” in Proceedings of the Twelfth International Conference on Machine Learning (Tahoe City, CA, USA), pp. 558-565, Morgan Kaufmann, 1995.
8. R. Holte, L. Acker, and B. Porter, “Concept learning and the problem of small disjuncts,” in Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (Detroit, MI, USA), pp. 813-818, Morgan Kaufmann, 1989.
9. K. Ali and M. Pazzani, “HYDRA-MM: Learning multiple descriptions to improve classification accuracy,” International Journal of Artificial Intelligence Tools, vol. 4, pp. 97-122, 1995.
10. A. van den Bosch, T. Weijters, H. J. van den Herik, and W. Daelemans, “When small disjuncts abound, try lazy learning: A case study,” in Proceedings of the Seventh Belgian-Dutch Conference on Machine Learning (Tilburg, Netherlands), pp. 109-118, Tilburg University, 1997.
11. K. Ting, “The problem of small disjuncts: Its remedy in decision trees,” in Proceedings of the Tenth Canadian Conference on Artificial Intelligence, pp. 91-97, Morgan Kaufmann, 1994.
12. G. Weiss and H. Hirsh, “A quantitative study of small disjuncts,” in Proceedings of the Seventeenth National Conference on Artificial Intelligence (Austin, TX, USA), pp. 665-670, AAAI Press, 2000.
13. G. Weiss and H. Hirsh, “A quantitative study of small disjuncts: Experiments and results,” Tech. Rep. ML-TR-42, Rutgers University, 2000.
14. B. Liu, W. Hsu, and Y. Ma, “Mining association rules with multiple minimum supports,” in Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (San Diego, CA, USA), pp. 337-341, ACM, 1999.
15. P. Riddle, R. Segal, and O. Etzioni, “Representation design and brute-force induction in a Boeing manufacturing domain,” Applied Artificial Intelligence, vol. 8, pp. 125-147, 1994.
16. J. Friedman, R. Kohavi, and Y. Yun, “Lazy decision trees,” in Proceedings of the Thirteenth National Conference on Artificial Intelligence (Portland, OR, USA), pp. 717-724, AAAI Press, 1996.
17. A. Bradley, “The use of the area under the ROC curve in the evaluation of machine learning algorithms,” Pattern Recognition, vol. 30, no. 7, pp. 1145-1159, 1997.
18. F. Provost and T. Fawcett, “Robust classification for imprecise environments,” Machine Learning, vol. 42, pp. 203-231, 2001.
19. D. Hand, “Measuring classifier performance: A coherent alternative to the area under the ROC curve,” Machine Learning, vol. 77, pp. 103-123, 2009.
20. C. van Rijsbergen, Information Retrieval. London: Butterworths, 1979.
21. C. Cai, A. Fu, C. Cheng, and W. Kwong, “Mining association rules with weighted items,” in Proceedings of the Database Engineering and Applications Symposium (Cardiff, UK), pp. 68-77, IEEE Computer Society, 1998.
22. C. Carter, H. Hamilton, and N. Cercone, “Share based measures for itemsets,” in Principles of Data Mining and Knowledge Discovery (J. Komorowski and J. Zytkow, eds.), Lecture Notes in Computer Science, vol. 1263, pp. 14-24, Berlin/Heidelberg/New York: Springer-Verlag, 1997.