Digital Signal Processing Reference
11.3 Markov Source
Further opportunities for optimization arise when the probability of each successive outcome or symbol depends on previous outcomes. An obvious example: if the nth letter is a "q", the next letter is almost certainly a "u". Similarly, if the nth letter is a "t", the probability that the following letter is an "h" rises. Data sources with this kind of dependency between successive symbols are known as Markov sources. The dependency enables more sophisticated encoding schemes: common groups of letters with high probabilities, such as the pairs "st" and "tr" or the triplet "the", can be mapped to specific codewords.
We can do this by mapping the probabilities of a given letter based on the few letters preceding it. In essence, we are making predictions based on the probability of certain letter groups appearing in a given construct of the English language. Of course, different languages, even those using the same alphabet, will have a different set of multi-letter probabilities, and therefore different codeword mappings. An easy example in the United States is the letter sequence HAWAI_: out of 26 possibilities, the next letter is very likely an "I".
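As a sketch of the idea, first-order (digram) statistics can be estimated simply by counting letter pairs in a sample text. The mini-corpus and the `p_next` helper below are hypothetical, chosen purely for illustration; a real encoder would gather statistics from a large corpus.

```python
from collections import Counter

# Hypothetical mini-corpus; spaces count as symbols too
text = "the theory that the truth is there"

# First-order (digram) statistics: joint pair counts and context counts
pairs = Counter(zip(text, text[1:]))
context = Counter(text[:-1])

def p_next(cur, nxt):
    """Estimated conditional probability P(nxt | cur)."""
    return pairs[(cur, nxt)] / context[cur]

print(p_next("t", "h"))  # fraction of 't's followed by 'h' -> 0.75
```

In this tiny sample, 6 of the 8 occurrences of "t" are followed by "h", so the estimate of P(h|t) is 0.75, illustrating exactly the kind of dependency a Markov model exploits.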
In a first-order Markov source, a given symbol's probability depends on the previous symbol. In a second-order Markov source, it depends on the previous two symbols, and so forth. The average entropy per symbol tends to decrease as the order of the Markov source increases, but the complexity of the system also grows with the order.
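This trend can be seen empirically. The sketch below estimates the conditional entropy of the next symbol at increasing orders from block (substring) counts; the toy text is hypothetical and fully periodic, chosen only to make the effect obvious, and the wrap-around counting is an assumption to keep the counts exact.

```python
import math
from collections import Counter

def block_entropy(text, k):
    """Entropy in bits of the distribution of length-k substrings (circular counts)."""
    t = text + text[:k - 1]  # wrap around so every position yields a full block
    counts = Counter(t[i:i + k] for i in range(len(text)))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def cond_entropy(text, order):
    """Estimated entropy of the next symbol given `order` preceding symbols."""
    if order == 0:
        return block_entropy(text, 1)
    return block_entropy(text, order + 1) - block_entropy(text, order)

# Hypothetical toy text with total symbol-to-symbol dependence
text = "ab" * 200
for k in range(3):
    print(k, round(cond_entropy(text, k), 3))
# 0 1.0   (symbols alone look like a fair coin)
# 1 0.0   (the previous symbol determines the next one)
# 2 0.0
```

With no context the source looks like one bit per symbol, but a single symbol of context drives the conditional entropy to zero, a caricature of how English text becomes more predictable as the modeled order rises.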
A first-order binary Markov source is shown in Figure 11.1. The transition probabilities depend only on the current state. In this case, the entropy can be found as the weighted sum of the conditional entropies corresponding to the transition probabilities of the state diagram. The probability of a zero given that the previous state is a one is written P(0|1).
Figure 11.1. Markov diagram: two states, 0 and 1, with transition probabilities P(0|0) = 0.8, P(1|0) = 0.2, P(1|1) = 0.6, P(0|1) = 0.4.
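The weighted sum can be carried out directly for the source in Figure 11.1. A minimal Python sketch: the stationary state probabilities follow from balancing the 0→1 and 1→0 transition flows, and the source entropy is then the stationary-weighted average of the two per-state conditional entropies.

```python
import math

def entropy(probs):
    """Entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Transition probabilities from Figure 11.1: P[s][x] = P(x | state s)
P = {0: {0: 0.8, 1: 0.2},
     1: {0: 0.4, 1: 0.6}}

# Stationary distribution: pi0 * P(1|0) = pi1 * P(0|1)
pi1 = P[0][1] / (P[0][1] + P[1][0])  # 0.2 / 0.6 = 1/3
pi0 = 1 - pi1                        # 2/3

# Source entropy: weighted sum of per-state conditional entropies
H = pi0 * entropy(P[0].values()) + pi1 * entropy(P[1].values())
print(round(pi0, 4), round(pi1, 4), round(H, 4))  # 0.6667 0.3333 0.8049
```

The result, about 0.805 bits per symbol, is below the 1 bit per symbol of a memoryless fair binary source, showing how the symbol-to-symbol dependence lowers the average entropy.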