Digital Signal Processing Reference
1.6.2 Bayes' Rule
Let the space S be subdivided into N mutually exclusive events B_j, j = 1, ..., N.
The probabilities of events B_j are called a priori probabilities because they represent the probabilities of B_j before the experiment is performed. Similarly, the probabilities P{A|B_j} are typically known prior to conducting the experiment, and are called transition probabilities.
Now, suppose that the experiment is performed and, as a result, event A occurred. The probability of the occurrence of any of the events B_j, knowing that event A has occurred, is called the a posteriori probability P{B_j|A}. The a posteriori probability is calculated using the conditional probability (1.43) and the theorem of total probability (1.65), resulting in:
P{B_j|A} = P{AB_j} / P{A} = P{A|B_j} P{B_j} / (sum from i=1 to N of P{A|B_i} P{B_i}).   (1.70a)
The formula in (1.70a) is known as Bayes' rule.
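Bayes' rule (1.70a) can be sketched in a few lines of code: given the a priori probabilities P{B_i} and the transition probabilities P{A|B_i}, the denominator is the total probability of A and the numerator is the joint term for the chosen j. The numerical values below are illustrative only, not from the text.

```python
# Minimal sketch of Bayes' rule (1.70a).  The prior and transition
# values are hypothetical, chosen only to illustrate the computation.

def posterior(priors, transitions, j):
    """Return P{B_j|A} = P{A|B_j}P{B_j} / sum_i P{A|B_i}P{B_i}."""
    # Denominator: P{A} by the theorem of total probability (1.65)
    total = sum(t * p for t, p in zip(transitions, priors))
    return transitions[j] * priors[j] / total

# Two mutually exclusive events B_0, B_1 subdividing S
priors = [0.6, 0.4]        # a priori probabilities P{B_0}, P{B_1}
transitions = [0.1, 0.5]   # transition probabilities P{A|B_0}, P{A|B_1}

p0 = posterior(priors, transitions, 0)   # 0.06 / 0.26
p1 = posterior(priors, transitions, 1)   # 0.20 / 0.26
```

Note that the a posteriori probabilities over all B_j sum to one, since the B_j are mutually exclusive and cover S.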
Because (1.70a) is a complicated formula, some authors, like Haddad [HAD06, p. 32], consider it better to write Bayes' rule using the following two equations:

P{B_j|A} = P{AB_j} / P{A} = P{A|B_j} P{B_j} / P{A},

P{A} = sum from i=1 to N of P{A|B_i} P{B_i}.   (1.70b)
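The two-equation form (1.70b) has a direct computational reading: compute P{A} once by total probability, then divide each joint term by it. A short sketch, again with illustrative numbers:

```python
# Sketch of the two-step form (1.70b); the three priors and
# transition probabilities are hypothetical example values.

priors = [0.5, 0.3, 0.2]        # a priori P{B_i}
transitions = [0.2, 0.4, 0.9]   # transition P{A|B_i}

# Step 1: P{A} = sum_i P{A|B_i} P{B_i}  (computed once)
p_a = sum(t * p for t, p in zip(transitions, priors))

# Step 2: P{B_j|A} = P{A|B_j} P{B_j} / P{A} for every j
posteriors = [t * p / p_a for t, p in zip(transitions, priors)]
```

Splitting the computation this way avoids re-evaluating the sum in the denominator for every j.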
1.7 Independent Events
Consider two events A and B with nonzero probabilities P{A} and P{B}, and suppose that the occurrence of one event does not affect the occurrence of the other. This means that the conditional probability of event A, given B, is equal to the probability of event A,
P{A|B} = P{A}.   (1.71)
Similarly, the conditional probability of event B, given A, is equal to the probability of event B,
P{B|A} = P{B}.   (1.72)
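Conditions (1.71) and (1.72) can be verified by direct enumeration on a finite sample space. The dice example below is not from the text; it is a hypothetical illustration with A = "first die is even" and B = "the sum is 7", which turn out to be independent.

```python
# Checking independence via (1.71)/(1.72) by enumerating the sample
# space of two fair dice.  The choice of events A and B is hypothetical.

from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]  # sample space S

A = {o for o in outcomes if o[0] % 2 == 0}   # first die shows an even number
B = {o for o in outcomes if sum(o) == 7}     # the two dice sum to 7

p_a = Fraction(len(A), len(outcomes))        # P{A} = 1/2
p_b = Fraction(len(B), len(outcomes))        # P{B} = 1/6
p_a_given_b = Fraction(len(A & B), len(B))   # P{A|B}
p_b_given_a = Fraction(len(A & B), len(A))   # P{B|A}

# (1.71) and (1.72): for independent events the conditioning drops out
independent = (p_a_given_b == p_a) and (p_b_given_a == p_b)
```

Here P{A|B} = 3/6 = 1/2 = P{A}, so conditioning on B does not change the probability of A, exactly as (1.71) requires.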