Table 7.7 Confusion Matrix of Naïve Bayes from the Bank Marketing Example

                                Predicted Class
                         Subscribed   Not Subscribed   Total
Actual   Subscribed           3              8           11
Class    Not Subscribed       2             87           89
         Total                5             95          100
The accuracy (or the overall success rate) is a metric defining the rate at
which a model has classified the records correctly. It is defined as the sum of TP
and TN divided by the total number of instances, as shown in Equation 7.18.

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (7.18)
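As a quick check, a minimal Python sketch (using the cell counts from Table 7.7, with Subscribed taken as the positive class) reproduces the accuracy computation:

```python
# Cell counts from Table 7.7 (Subscribed = positive class)
TP = 3   # actual Subscribed, predicted Subscribed
FN = 8   # actual Subscribed, predicted Not Subscribed
FP = 2   # actual Not Subscribed, predicted Subscribed
TN = 87  # actual Not Subscribed, predicted Not Subscribed

# Equation 7.18: accuracy = (TP + TN) / total number of instances
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 90 correct out of 100 records -> 0.9
```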
A good model should have a high accuracy score, but having a high accuracy score
alone does not guarantee the model is well established. The following measures can
be introduced to better evaluate the performance of a classifier.
As seen in Chapter 6, the true positive rate (TPR) shows what percent of
positive instances the classifier correctly identified, as shown in Equation 7.19.

TPR = TP / (TP + FN)    (7.19)
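Applied to Table 7.7, a short Python sketch gives the TPR for the bank marketing model (Subscribed as the positive class):

```python
# Counts from Table 7.7 (Subscribed = positive class)
TP = 3   # subscribers the model correctly identified
FN = 8   # subscribers the model missed

# Equation 7.19: TPR = TP / (TP + FN)
tpr = TP / (TP + FN)
print(round(tpr, 4))  # 3 of 11 actual subscribers -> 0.2727
```

Despite the 90 percent accuracy, the model identifies only about 27 percent of the actual subscribers, which is why accuracy alone is not a sufficient measure.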
The false positive rate (FPR) shows what percent of negatives the classifier
marked as positive. The FPR is also called the false alarm rate or the type I
error rate and is shown in Equation 7.20.

FPR = FP / (FP + TN)    (7.20)
The false negative rate (FNR) shows what percent of positives the classifier
marked as negative. It is also known as the miss rate or type II error rate and
is shown in Equation 7.21. Note that the sum of TPR and FNR is 1.

FNR = FN / (TP + FN)    (7.21)
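Continuing the Table 7.7 numbers, a small Python sketch computes the FPR and FNR and confirms that TPR and FNR sum to 1:

```python
# Counts from Table 7.7 (Subscribed = positive class)
TP, FN = 3, 8    # actual subscribers: identified / missed
FP, TN = 2, 87   # actual non-subscribers: false alarms / correct rejections

# Equation 7.20: FPR = FP / (FP + TN)  (false alarm / type I error rate)
fpr = FP / (FP + TN)
# Equation 7.21: FNR = FN / (TP + FN)  (miss / type II error rate)
fnr = FN / (TP + FN)
tpr = TP / (TP + FN)

print(round(fpr, 4))       # 2 of 89 non-subscribers flagged -> 0.0225
print(round(fnr, 4))       # 8 of 11 subscribers missed -> 0.7273
print(round(tpr + fnr, 9)) # TPR + FNR = 1.0
```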