different. With current technology, we can delegate to computers the reasoning about low-level
entities (in the lower-left of Figure 1) in the knowledge management hierarchy. This is because
at the lower levels, reasoning is more objective and concrete. The theoretical foundation of
reasoning at the lower levels can be captured by Bayes' theorem.
4. Theoretical foundation of computer reasoning
The insight we gain from the above discussion of raw data, information, knowledge, and
wisdom is that reasoning at the lower levels is easier than reasoning at the highest level,
because at the lower levels we only need to deal with knowledge finding (data mining) and
the application of the appropriate knowledge to some evidence. At the highest level (the
killer-idea level), we do not even know the mechanism that produces creative ideas;
therefore, it is much harder to reason at this level. As mentioned in the previous
section, the theoretical foundation of computer reasoning is Bayes' theorem. So what is
Bayes' theorem? It can be expressed as Formula 1:
P(A|X) = P(X|A) * P(A) / [P(X|A) * P(A) + P(X|~A) * P(~A)]    (Formula 1)
The notation P(A|X) means the probability (or chance) that event A will happen
given the evidence (or observation) X. In probability theory, this is called a conditional
probability. Depending on the quality of the evidence X, the probability that event A
happens may be heavily affected by its presence.
The symbol “~” means complement, that is, the opposite of what follows it. For example, if
P(A) means the probability that event A will happen, then P(~A) means the probability that
event A will not happen. Note that there are three pieces in Formula 1:
the reasoning about the occurrence of an event A (the left side of the equation), the evidence
X, and the causal relationship between the evidence X and the event A (embodied by
P(X|A) and P(X|~A)).
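
To make these three pieces concrete, here is a minimal Python sketch of Formula 1; the function and argument names are illustrative choices, not taken from the text.

    # A minimal sketch of Formula 1; names here are illustrative, not from the text.
    def bayes(p_a, p_x_given_a, p_x_given_not_a):
        """Return P(A|X) given the prior P(A) and the likelihoods P(X|A), P(X|~A)."""
        p_not_a = 1.0 - p_a                    # P(~A) is the complement of P(A)
        numerator = p_x_given_a * p_a          # P(X|A) * P(A)
        denominator = numerator + p_x_given_not_a * p_not_a
        return numerator / denominator
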
In a nutshell, Formula 1 says that if we see a piece of evidence X, we can reason about the
chance of event A's occurrence, given that the evidence X and the event A have a causal
relationship. This is exactly the behavior that a rational person displays when given a piece of
evidence related to the event. Formula 1 can also be extended to two, three, or
many pieces of evidence; all we need to do is apply the formula multiple times. For
example, if both X and Y contribute to the occurrence of event A, we first apply Formula 1
to get the probability of A given evidence X. Then we apply Formula 1 again, this time
using the result of the first iteration in place of the prior probabilities P(A) and P(~A).
In this way, we can repeatedly apply Formula 1 to reason about any number of pieces of
evidence, as the sketch below illustrates.
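
The following Python sketch shows this repeated application of Formula 1. It implicitly assumes, as the discussion above does, that the pieces of evidence are conditionally independent given A; the numbers at the end are made up purely for illustration.

    # Repeated application of Formula 1: each pass uses the previous
    # posterior as the new prior P(A).
    def bayes(p_a, p_x_given_a, p_x_given_not_a):
        numerator = p_x_given_a * p_a
        return numerator / (numerator + p_x_given_not_a * (1.0 - p_a))

    def update_with_evidence(prior, likelihood_pairs):
        """likelihood_pairs: a list of (P(E|A), P(E|~A)) pairs, one per piece of evidence E."""
        p_a = prior
        for p_e_given_a, p_e_given_not_a in likelihood_pairs:
            p_a = bayes(p_a, p_e_given_a, p_e_given_not_a)   # posterior becomes the new prior
        return p_a

    # Illustrative (made-up) numbers: a 1% prior, then evidence X and evidence Y.
    print(update_with_evidence(0.01, [(0.9, 0.1), (0.8, 0.2)]))
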
To get a better handle on how Bayes' theorem works, let's work through a concrete
example. Suppose we have the following problem statement:
Example 1: “Lung cancer is the leading cause of cancer death in the United States.”
(Williams, 2003, p. 463) Suppose that about 0.2% of the US population above age 20 has
lung cancer. During an annual check-up, suppose that 85% of the people with lung cancer
will show positive on the chest x-ray test. On the other hand, the chest x-ray also produces
false alarms: 6% of the people without lung cancer will also show positive on the chest
x-ray test.
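
Plugging the numbers from Example 1 into Formula 1 gives the following sketch of the calculation, assuming the question being asked is the probability of lung cancer given a positive chest x-ray.

    # Worked calculation for Example 1, assuming the question is
    # P(lung cancer | positive chest x-ray). The numbers come from the statement.
    p_cancer = 0.002            # prior: 0.2% of the over-20 US population has lung cancer
    p_pos_given_cancer = 0.85   # 85% of people with lung cancer test positive
    p_pos_given_healthy = 0.06  # 6% of people without lung cancer also test positive

    numerator = p_pos_given_cancer * p_cancer
    denominator = numerator + p_pos_given_healthy * (1.0 - p_cancer)
    print(numerator / denominator)   # about 0.0276, i.e. roughly a 2.8% chance
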