to a single scalar. It does so in a more sophisticated way than the G-mean, as it
allows the user to weigh the contribution of each component as desired. More
specifically, the F-measure is the weighted harmonic mean of precision and recall, and is formally defined as follows: for any α ∈ R, α > 0,

F_α = [(1 + α) × precision × recall] / [(α × precision) + recall]    (8.7)
where α typically takes the value 1, 2, or 0.5 to signify that precision and recall are weighted equally when α = 1, that recall weighs twice as much as precision when α = 2, and that precision weighs twice as much as recall when α = 0.5.
The F-measure is very popular in the domains of information retrieval and text categorization.
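To make the definition concrete, the short Python sketch below computes F_α directly from Equation (8.7); the function name, argument names, and the convention of returning 0 when the denominator vanishes are our own illustrative choices, not part of the text.

```python
# Illustrative sketch of the F-measure of Equation (8.7); the names and
# the zero-denominator convention are assumptions, not from the text.

def f_measure(precision: float, recall: float, alpha: float = 1.0) -> float:
    """F_alpha = (1 + alpha) * precision * recall / (alpha * precision + recall)."""
    if alpha <= 0:
        raise ValueError("alpha must be strictly positive")
    denominator = alpha * precision + recall
    if denominator == 0:
        return 0.0  # convention when both precision and recall are zero
    return (1 + alpha) * precision * recall / denominator

# Example: with alpha = 2, recall weighs twice as much as precision.
print(f_measure(precision=0.75, recall=0.40, alpha=2))
```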
We now return to multi-class-focused metrics and show how their shortcomings in the case of class imbalance can be overcome with appropriate weightings.
8.3.5 Macro-Averaged Accuracy
The macro-averaged accuracy (MAA) is presented in [3] and is calculated as the
arithmetic average of the partial accuracies of each class. Its formula is given as
follows for the binary case ([3] gives a more general definition for multi-class
problems):
MAA = (sensitivity + specificity) / 2    (8.8)
In this formulation, we see that no matter what the frequency of each class is,
the classifier's accuracy on each class is equally weighted.
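As a concrete illustration, the following Python sketch computes the MAA of Equation (8.8) from the four counts of a binary confusion matrix; the function and argument names are ours.

```python
# Illustrative sketch of the macro-averaged accuracy (MAA) of Equation (8.8),
# computed from binary confusion-matrix counts; names are assumptions.

def macro_averaged_accuracy(tp: int, fn: int, tn: int, fp: int) -> float:
    sensitivity = tp / (tp + fn)   # accuracy on the positive class
    specificity = tn / (tn + fp)   # accuracy on the negative class
    return (sensitivity + specificity) / 2

# A heavily imbalanced example: 10 positives versus 990 negatives.
# The minority class still contributes half of the final score.
print(macro_averaged_accuracy(tp=5, fn=5, tn=950, fp=40))
```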
The second macro-averaged accuracy presented in [3] is calculated as the geometric average of the partial accuracies of each class; it is nothing but the G-mean that was already presented in Section 8.3.3. Both measures were shown to perform better than other threshold metrics in class-imbalanced cases in the experiments from [3] that we related in Section 8.2.
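For completeness, here is a minimal sketch of that geometric variant, assuming the binary case where the partial accuracies are sensitivity and specificity; the function name is illustrative.

```python
# Sketch of the geometric macro-averaged accuracy, i.e., the G-mean of
# Section 8.3.3, for the binary case; names are illustrative.
from math import sqrt

def geometric_macro_accuracy(sensitivity: float, specificity: float) -> float:
    return sqrt(sensitivity * specificity)

# Unlike the arithmetic average, a zero accuracy on either class
# drives the whole score to zero.
print(geometric_macro_accuracy(sensitivity=0.60, specificity=0.95))
```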
8.3.6 Newer Combinations of Threshold Metrics
Up to this point, we have only discussed the most common threshold metrics
that deal with the class imbalance problem. In the past few years, a number of
newer combinations of threshold metrics have been suggested to improve on the
ones previously used in the community. We discuss them now.
8.3.6.1 Mean-Class-Weighted Accuracy The mean-class-weighted accuracy
[8] implements a small modification to the macro-averaged accuracy metric
just presented. In particular, rather than giving sensitivity and specificity equal
weights, Cohen et al. [8] choose to give the user control over each component's
weight. More specifically, Cohen et al. [8] who applied machine learning
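The excerpt above breaks off before the formal definition, so the following Python sketch only illustrates the general idea of user-controlled weights: a single weight w (our notation, an assumption) replaces the fixed 1/2 of the macro-averaged accuracy.

```python
# Hedged sketch of a class-weighted accuracy in the spirit of [8]: a
# user-chosen weight w replaces the fixed 1/2 of Equation (8.8). The
# exact formulation of [8] is not given in this excerpt, so this
# weighting scheme is an assumption.

def class_weighted_accuracy(sensitivity: float, specificity: float,
                            w: float = 0.5) -> float:
    if not 0.0 <= w <= 1.0:
        raise ValueError("w must lie in [0, 1]")
    return w * sensitivity + (1 - w) * specificity

# With w = 0.5 this reduces to the MAA of Equation (8.8); larger w puts
# more emphasis on the positive (often minority) class.
print(class_weighted_accuracy(sensitivity=0.60, specificity=0.95, w=0.7))
```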