REFERENCES
1. H. He and E. A. Garcia, "Learning from imbalanced data," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 9, pp. 1263-1284, 2009.
2. N. Japkowicz (ed.), "Learning from imbalanced data sets," American Association for Artificial Intelligence (AAAI) Workshop Technical Report WS-00-05, 2000.
3. N. V. Chawla, N. Japkowicz, and A. Kolcz (eds.), Workshop on learning from imbalanced data sets II, in Proceedings of International Conference on Machine Learning, 2003.
4. N. V. Chawla, N. Japkowicz, and A. Kolcz, "Editorial: Special issue on learning from imbalanced data sets," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 1-6, 2004.
5. N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
6. H. Guo and H. L. Viktor, "Learning from imbalanced data sets with boosting and data generation: The DataBoost-IM approach," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 30-39, 2004.
7. K. Woods, C. Doss, K. Bowyer, J. Solka, C. Priebe, and W. Kegelmeyer, "Comparative evaluation of pattern recognition techniques for detection of microcalcifications in mammography," International Journal of Pattern Recognition and Artificial Intelligence, vol. 7, no. 6, pp. 1417-1436, 1993.
8. R. B. Rao, S. Krishnan, and R. S. Niculescu, "Data mining for improved cardiac care," ACM SIGKDD Explorations Newsletter, vol. 8, no. 1, pp. 3-10, 2006.
9. A. Estabrooks, T. Jo, and N. Japkowicz, "A multiple resampling method for learning from imbalanced data sets," Computational Intelligence, vol. 20, no. 1, pp. 18-36, 2004.
10. C. Drummond and R. C. Holte, "C4.5, class imbalance, and cost sensitivity: Why under-sampling beats over-sampling," in Proceedings of International Conference on Machine Learning, Workshop on Learning from Imbalanced Data Sets II, 2003.
11. H. Han, W. Y. Wang, and B. H. Mao, "Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning," in Proceedings of International Conference on Intelligent Computing (Hefei, China), Springer, pp. 878-887, 2005.
12. H. He, Y. Bai, E. A. Garcia, and S. Li, "ADASYN: Adaptive synthetic sampling approach for imbalanced learning," in Proceedings of International Joint Conference on Neural Networks (Hong Kong, China), IEEE, pp. 1322-1328, 2008.
13. M. Kubat and S. Matwin, "Addressing the curse of imbalanced training sets: One-sided selection," in Proceedings of International Conference on Machine Learning, pp. 179-186, 1997.
14. T. Jo and N. Japkowicz, "Class imbalances versus small disjuncts," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 40-49, 2004.
15. N. V. Chawla, A. Lazarevic, L. O. Hall, and K. W. Bowyer, "SMOTEBoost: Improving prediction of the minority class in boosting," in Proceedings of Principles of Knowledge Discovery in Databases, pp. 107-119, 2003.
16. D. Mease, A. J. Wyner, and A. Buja, "Boosted classification trees and class probability/quantile estimation," Journal of Machine Learning Research, vol. 8, pp. 409-439, 2007.