ROC curve and AUC
The ROC curve is a concept similar to the PR curve. It is a graphical illustration of the true
positive rate against the false positive rate for a classifier.
The true positive rate (TPR) is the number of true positives divided by the sum of true
positives and false negatives. In other words, it is the ratio of true positives to all positive
examples. This is the same as the recall we saw earlier and is also commonly referred to as
sensitivity.
The false positive rate (FPR) is the number of false positives divided by the sum of false
positives and true negatives (where the true negatives are the examples correctly predicted
as class 0). In other words, it is the ratio of false positives to all negative examples.
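To make these two definitions concrete, here is a minimal Python sketch that computes TPR and FPR from raw counts of true and false positives and negatives; the counts are purely illustrative and not taken from any example in the text.

```python
def true_positive_rate(tp, fn):
    # TPR (recall / sensitivity): true positives over all actual positive examples
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    # FPR: false positives over all actual negative examples (class 0)
    return fp / (fp + tn)

# Illustrative counts from a hypothetical binary classifier
tp, fp, tn, fn = 80, 10, 90, 20

print(true_positive_rate(tp, fn))   # 80 / (80 + 20) = 0.8
print(false_positive_rate(fp, tn))  # 10 / (10 + 90) = 0.1
```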
In a manner similar to the precision-recall curve, the ROC curve represents the classifier's
performance tradeoff of TPR against FPR for different decision thresholds. Each point on the
curve corresponds to a different threshold in the decision function for the classifier.
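As a sketch of how this threshold sweep produces the curve, the following code computes one (FPR, TPR) point per threshold from a hypothetical set of classifier scores and true labels, using only NumPy; in practice a library routine would typically be used instead.

```python
import numpy as np

# Hypothetical scores (e.g., predicted probabilities of class 1) and true labels
scores = np.array([0.95, 0.85, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10, 0.05])
labels = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])

def roc_points(scores, labels, thresholds):
    # Sweep the decision threshold and record (FPR, TPR) at each setting
    positives = (labels == 1).sum()
    negatives = (labels == 0).sum()
    points = []
    for t in thresholds:
        predicted = scores >= t  # predict class 1 when the score clears the threshold
        tp = (predicted & (labels == 1)).sum()
        fp = (predicted & (labels == 0)).sum()
        points.append((fp / negatives, tp / positives))
    return points

# One ROC point per decision threshold, from the strictest threshold to the loosest
for fpr, tpr in roc_points(scores, labels, thresholds=np.linspace(1.0, 0.0, 11)):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Lowering the threshold moves the point up and to the right: more examples are predicted as class 1, which raises both the TPR and the FPR.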