17. T. K. Ho, “The random subspace method for constructing decision forests,” IEEE
Transactions on Pattern Analysis and Machine Intelligence , vol. 20, no. 8, pp.
832-844, 1998.
18. L. Breiman, “Random forests,” Machine Learning , vol. 45, no. 1, pp. 5-32, 2001.
19. E. Bauer and R. Kohavi, “An empirical comparison of voting classification algorithms:
Bagging, boosting, and variants,” Machine Learning , vol. 36, no. 1, pp. 105-139,
1999.
20. N. V. Chawla, A. Lazarevic, L. O. Hall, and K. W. Bowyer, “SMOTEBoost: Improving
prediction of the minority class in boosting,” in Proceedings of the Principles of
Knowledge Discovery in Databases, PKDD-2003 , (Cavtat-Dubrovnik, Croatia), vol.
2838, pp. 107-119, Springer-Verlag, 2003.
21. H. Guo and H. L. Viktor, “Learning from imbalanced data sets with boosting and data
generation: The DataBoost-IM approach,” SIGKDD Explorations Newsletter , vol. 6,
no. 1, pp. 30-39, 2004.
22. P. Radivojac, N. V. Chawla, A. K. Dunker, and Z. Obradovic, “Classification and
knowledge discovery in protein databases,” Journal of Biomedical Informatics , vol.
37, no. 4, pp. 224-239, 2004.
23. X. Y. Liu, J. Wu, and Z. H. Zhou, “Exploratory under-sampling for class-imbalance
learning,” in ICDM '06: Proceedings of the Sixth International Conference on Data
Mining , pp. 965-969, Washington, DC: IEEE Computer Society, 2006.
24. S. Hido and H. Kashima, “Roughly balanced bagging for imbalanced data,” in SDM ,
pp. 143-152, SIAM, 2008.
25. T. R. Hoens and N. V. Chawla, “Generating diverse ensembles to counter the problem
of class imbalance,” in Pacific-Asia Conference on Advances in Knowledge Discovery
and Data Mining, PAKDD , (Hyderabad, India), vol. 6119, pp. 488-499,
Springer-Verlag, 2010.
26. D. A. Cieslak, T. R. Hoens, N. V. Chawla, and W. P. Kegelmeyer, “Hellinger dis-
tance decision trees are robust and skew-insensitive,” Data Mining and Knowledge
Discovery , (Hingham, MA, USA), vol. 24, no. 1, pp. 136-158, Kluwer Academic
Publishers, 2012.
27. W. Fan, S. J. Stolfo, J. Zhang, and P. K. Chan, “AdaCost: Misclassification cost-
sensitive boosting,” in Proceedings of the Sixteenth International Conference on
Machine Learning (ICML) , (Bled, Slovenia), pp. 97-105, Morgan Kaufmann, 1999.
28. K. M. Ting, “A comparative study of cost-sensitive boosting algorithms,” in Proceed-
ings of the 17th International Conference on Machine Learning , 2000.
29. D. A. Cieslak and N. V. Chawla, “Learning decision trees for unbalanced data,” in
European Conference on Machine Learning (ECML) , (Antwerp, Belgium), vol. 5211,
pp. 241-256, Springer-Verlag, 2008.
30. J. R. Quinlan, C4.5: Programs for Machine Learning . San Mateo, CA: Morgan
Kaufmann Publishers, Inc., 1993.
31. F. Provost and P. Domingos, “Tree induction for probability-based ranking,” Machine
Learning , vol. 52, no. 3, pp. 199-215, 2003.
32. J. A. Swets, “Measuring the accuracy of diagnostic systems,” Science , vol. 240, no.
4857, pp. 1285-1293, 1988.
33. J. P. Egan, Signal Detection Theory and ROC Analysis . New York: Academic Press,
1975.