6. F. J. Provost and T. Fawcett, "Robust classification for imprecise environments," Machine Learning, vol. 42, no. 3, pp. 203-231, 2001.
7. M. Kubat, R. C. Holte, and S. Matwin, "Machine learning for the detection of oil spills in satellite radar images," Machine Learning, vol. 30, pp. 195-215, 1998.
8. G. Cohen, M. Hilario, H. Sax, S. Hugonnet, and A. Geissbuhler, "Learning from imbalanced data in surveillance of nosocomial infection," Artificial Intelligence in Medicine, vol. 37, no. 1, pp. 7-18, 2006.
9. R. Ranawana and V. Palade, "Optimized precision: A new measure for classifier performance evaluation," in Proceedings of the IEEE Congress on Evolutionary Computation (Vancouver, BC), pp. 2254-2261, IEEE Computer Society, 2006.
10. R. Batuwita and V. Palade, "A new performance measure for class imbalance learning: Application to bioinformatics problems," in Proceedings of the International Conference on Machine Learning and Applications (ICMLA) (Miami Beach, FL, USA), pp. 545-550, IEEE Computer Society, 2009.
11. V. García, R. A. Mollineda, and J. S. Sánchez, "Theoretical analysis of a performance measure for imbalanced data," in Proceedings of the International Conference on Pattern Recognition (ICPR) (Istanbul, Turkey), pp. 617-620, IEEE Computer Society, 2010.
12. G. I. Webb and K. M. Ting, "On the application of ROC analysis to predict classification performance under varying class distribution," Machine Learning, vol. 58, pp. 25-32, 2005.
13. T. Fawcett and P. A. Flach, "A response to Webb and Ting's 'On the application of ROC analysis to predict classification performance under varying class distribution'," Machine Learning, vol. 58, pp. 33-38, 2005.
14. T. Landgrebe, P. Paclik, and R. P. W. Duin, "Precision-recall operating characteristics (p-ROC) curves in imprecise environments," in Proceedings of the Eighteenth International Conference on Pattern Recognition (Hong Kong, China), pp. 123-127, IEEE Computer Society, 2006.
15. J. Davis and M. Goadrich, "The relationship between precision-recall and ROC curves," in Proceedings of the Twenty-Third International Conference on Machine Learning (Pittsburgh, PA, USA), pp. 233-240, ACM, 2006.
16. D. J. Hand, "Measuring classifier performance: A coherent alternative to the area under the ROC curve," Machine Learning, vol. 77, no. 1, pp. 103-123, 2009.
17. P. Flach, J. Hernández-Orallo, and C. Ferri, "A coherent interpretation of AUC as a measure of aggregated classification performance," in Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML-11) (New York, NY, USA), pp. 657-664, Omnipress, 2011.
18. A. Cárdenas and J. Baras, "B-ROC curves for the assessment of classifiers over imbalanced data sets," in Proceedings of the Twenty-First National Conference on Artificial Intelligence (Boston, MA, USA), pp. 1581-1584, AAAI Press, 2006.
19. M. Keller, S. Bengio, and S. Y. Wong, "Benchmarking non-parametric statistical tests," in Advances in Neural Information Processing Systems (NIPS) (Vancouver, BC, Canada), p. 464, MIT Press, 2005.
20. T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, pp. 861-874, 2006.
21. D. J. Hand and R. J. Till, "A simple generalisation of the area under the ROC curve for multiple class classification problems," Machine Learning, vol. 45, pp. 171-186, 2001.