Definition of evidence strength: we define the strength of evidence (or of a test) as the probability that the evidence gives a true positive divided by the probability that it gives a false positive. In other words, it can be represented by the following formula:
strength(evidence) = P(Posi|Cause) / P(Posi|~Cause)
(Formula 5)
One thing to point out is that the sum of the probabilities P(Posi|Cause) and P(Posi|~Cause) is not necessarily 1. Once evidence strength is defined, we can divide evidence into two categories: positive evidence and negative evidence. When the strength is greater than 1, the evidence shifts our belief in the positive direction, so we call it positive evidence; on the other hand, when the strength is smaller than 1, the evidence shifts our belief in the negative direction, so we call it negative evidence.
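Formula 5 and the positive/negative classification above can be sketched in a few lines of Python (the function name and the 90%/5% example rates are illustrative assumptions, not from the text):

```python
def strength(p_pos_given_cause, p_pos_given_not_cause):
    """Evidence strength per Formula 5: P(Posi|Cause) / P(Posi|~Cause)."""
    if p_pos_given_not_cause == 0:
        raise ValueError("false-positive rate must be nonzero")
    return p_pos_given_cause / p_pos_given_not_cause

# Hypothetical test: 90% true-positive rate, 5% false-positive rate.
s = strength(0.9, 0.05)
print(s)  # strength 18, well above 1: positive evidence
```

Note that the two arguments need not sum to 1, exactly as the text points out; they are conditional on different events.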
The probability P(Posi | Cause) in the numerator of Formula 5 captures the causal relationship in the real world: it is the probability that the cause produces a positive result on the evidence (test). In our Example 1, it takes the form P(positive x-ray | cancer), the probability that lung cancer causes the x-ray to be positive; P(positive x-ray | ~cancer) is the probability of a false alarm.
Now, let us make some observations about evidence. First, as mentioned before, to be effective evidence, a test's positive conditional probability cannot equal its negative conditional probability. Thus, in terms of strength, we have the following observation:
Observation 1: when the evidence strength is 1, it is not good evidence. Using the above definition, the effectiveness of a test (or a piece of evidence) is measured by its strength. If the strength is 1, the test is useless as a piece of evidence (it is neutral). When the strength is greater than 1, it is positive evidence (seeing the evidence shifts our view of the truth of the event “Cause” in the positive direction); when the strength is smaller than 1 and greater than 0, it is negative evidence (it diminishes our view of the truth of the “Cause”).
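Observation 1 amounts to a three-way classification by strength, which can be sketched as follows (the function name is an illustrative assumption):

```python
def classify_evidence(s):
    """Classify evidence by its strength, per Observation 1.

    s is strength(evidence) = P(Posi|Cause) / P(Posi|~Cause).
    """
    if s <= 0:
        raise ValueError("strength must be positive")
    if s > 1:
        return "positive"   # shifts belief toward the Cause
    if s < 1:
        return "negative"   # diminishes belief in the Cause
    return "neutral"        # strength == 1: useless as evidence

print(classify_evidence(18.0))  # positive
print(classify_evidence(1.0))   # neutral
print(classify_evidence(0.2))   # negative
```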
For example, suppose we are asked whether flipping a fair coin is a good test for predicting whether a person has lung cancer (assume that a head means the person has cancer and a tail means the person does not). We can proceed as follows:
1. First, we calculate the strength of flipping a coin as a test:
strength(flipping a coin) = P(head | cancer) / P(head | ~cancer) = 0.5 / 0.5 = 1
Note: P(head | cancer) = 0.5 because the fact that a patient has cancer has no bearing on the outcome of the coin flip; the chance of getting a head is still governed by its usual 50%. The same argument applies to P(head | ~cancer).
2. Since the strength is 1, our evidence theory tells us the test shifts our belief equally in the positive and negative directions, that is, not at all. Thus, we conclude that it is not a good test.
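The coin-flip calculation can be checked numerically (a minimal sketch; both conditional probabilities are the fair-coin values from step 1):

```python
# For a fair coin, the flip outcome is independent of the patient's
# condition, so both conditional probabilities equal 0.5.
p_head_given_cancer = 0.5
p_head_given_not_cancer = 0.5

s = p_head_given_cancer / p_head_given_not_cancer
print(s)  # 1.0 -> neutral by Observation 1, so not a good test
```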
With regard to what causes evidence to be strong, we have the following observation: