lence, it would still achieve an accuracy of around 64% by chance because
it would diagnose disease D on 80% of occasions, and on 80% of occasions
disease D would be present.
Citing accuracy alone also ignores differences between types of errors. If
a decision support system erroneously diagnoses disease D in a healthy
patient, this false-positive error may be less serious than if it pronounces
that the patient is suffering from disease E, or that a patient suffering from
disease D is healthy, a false-negative error. More complex errors can occur
if more than one disease is present, or if the decision support system issues
its output as a list of diagnoses ranked by probability. In this case, includ-
ing the correct diagnosis toward the end of the list is less serious than omit-
ting it altogether, but is considerably less useful than if the correct diagnosis
is ranked among the top three. 1
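One common way to score ranked diagnostic output is a top-k hit rate: the fraction of cases in which the correct diagnosis appears among the first k entries of the list. A minimal sketch (the function name and case data are illustrative, not from the text):

```python
def top_k_hit_rate(ranked_lists, correct, k=3):
    """Fraction of cases where the correct diagnosis appears in the
    top k entries of the system's ranked list of diagnoses."""
    hits = sum(1 for ranks, truth in zip(ranked_lists, correct)
               if truth in ranks[:k])
    return hits / len(correct)

# Hypothetical example: three cases, each with a ranked list of diagnoses.
outputs = [["D", "E", "F"],   # correct diagnosis ranked first
           ["E", "F", "D"],   # correct diagnosis ranked third
           ["E", "F", "G"]]   # correct diagnosis omitted entirely
truth = ["D", "D", "D"]
print(top_k_hit_rate(outputs, truth, k=3))  # 2 of 3 cases hit in the top three
```

Comparing the hit rate at k = 1 and k = 3 makes the distinction in the text concrete: a system that buries the correct diagnosis deep in the list scores well at large k but poorly at small k.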
The disadvantages of citing accuracy rates alone can be largely overcome
by using a contingency table to compare the output given by the informa-
tion resource against the gold standard, which (as discussed in Chapter 4)
is the accepted value of the truth. This method allows the difference
between false-positive and false-negative errors to be made explicit.
As shown in Table 8.2, 2 errors can be classified as false positive (FP) or
false negative (FN). Table 8.2 illustrates different indices based on the rates
of occurrence of these errors. Sensitivity and specificity, related to the false
negative and false positive rates respectively, are most commonly used. In
a field study where an information resource is being used, care providers
typically know the output and want to know how often it is correct, or they
suspect a disease and want to know how often the information resource
correctly detects it. In this situation, some care providers find the positive
predictive value and the sensitivity, also known as the detection rate, 3
intuitively more useful than the false-positive and false-negative rates. The
positive predictive value has the disadvantage that it is highly dependent
on disease prevalence, which may differ significantly between the test cases
used in a study and the clinical environment in which an information
resource is deployed.
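The prevalence dependence follows directly from Bayes' theorem: for fixed sensitivity and specificity, the positive predictive value is (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence)). A minimal sketch, using hypothetical sensitivity and specificity values rather than figures from the text:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from sensitivity, specificity and prevalence."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same hypothetical resource (90% sensitive, 90% specific) at two prevalences:
print(ppv(0.9, 0.9, 0.50))  # ~0.90 when half the test cases have the disease
print(ppv(0.9, 0.9, 0.01))  # ~0.083 in a clinic where the disease is rare
```

The drop from roughly 0.90 to roughly 0.08 shows why a positive predictive value measured on an artificially balanced test set can badly overstate performance in routine clinical use.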
Sensitivity and positive predictive value are particularly useful, however,
with information resources that issue alarms, as the accuracy, specificity, and
TABLE 8.2. Example of a contingency table.

                                         Gold standard
Decision-aid's advice      Attribute present    Attribute absent    Totals
Attribute present          TP                   FP                  TP + FP
Attribute absent           FN                   TN                  FN + TN
Total                      TP + FN              FP + TN             N

TP, true positive; FP, false positive; FN, false negative; TN, true negative.
Accuracy: (TP + TN)/N; false-negative rate: FN/(TP + FN); false-positive rate: FP/(FP + TN); positive predictive value: TP/(TP + FP); negative predictive value: TN/(FN + TN); detection rate (sensitivity): TP/(TP + FN); specificity: TN/(FP + TN).
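The indices defined beneath Table 8.2 can all be computed from the four cell counts. A minimal sketch; the counts used in the example are hypothetical:

```python
def contingency_indices(tp, fp, fn, tn):
    """Indices from Table 8.2, computed from the four contingency-table cells."""
    n = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / n,
        "false_negative_rate": fn / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
        "positive_predictive_value": tp / (tp + fp),
        "negative_predictive_value": tn / (fn + tn),
        "sensitivity": tp / (tp + fn),   # detection rate
        "specificity": tn / (fp + tn),
    }

# Hypothetical study: 40 TP, 10 FP, 5 FN, 45 TN (N = 100)
indices = contingency_indices(40, 10, 5, 45)
print(indices["accuracy"])                   # 0.85
print(indices["positive_predictive_value"])  # 0.8
print(indices["sensitivity"])                # ~0.889
```

Laying the indices out this way makes the asymmetry in the text explicit: sensitivity and the false-negative rate are computed down the "attribute present" column, while the predictive values are computed across the decision aid's rows.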