TABLE 3.20  Reduced three-letter alphabet.

Letter    Probability    Codeword
a3″       0.55           α2
a5        0.25           c(a5)
a1        0.20           c(a1)
TABLE 3.21  Ternary code for six-letter alphabet.

Letter    Probability    Codeword
a1        0.20           2
a2        0.05           021
a3        0.20           00
a4        0.20           01
a5        0.25           1
a6        0.10           020
FIGURE 3.11  Code tree for the nonbinary Huffman code. [Figure: a ternary tree. From the root, branch 1 leads to a5 and branch 2 to a1. Branch 0 leads to an internal node whose branches 0 and 1 lead to a3 and a4, and whose branch 2 leads to a node with children a6 (branch 0) and a2 (branch 1).]
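The construction behind Table 3.21 can be sketched in code. The helper below (`ternary_huffman` is an illustrative name, not from the text) pads the alphabet with zero-probability dummy letters so that every reduction combines exactly m letters, then repeatedly merges the m least probable nodes. The particular codewords depend on how ties among equal probabilities are broken, so they may differ from Table 3.21, but the average length, 1.70 ternary digits per symbol, is the same.

```python
import heapq
import itertools

def ternary_huffman(probs, m=3):
    """Sketch of an m-ary Huffman code builder (hypothetical helper).

    Pads with zero-probability dummies so (n - 1) is divisible by (m - 1),
    then repeatedly merges the m least probable nodes. Returns a dict
    mapping symbol -> codeword string over the digits 0..m-1.
    """
    n = len(probs)
    dummies = (-(n - 1)) % (m - 1)            # 1 dummy for n = 6, m = 3
    tie = itertools.count()                    # unique tie-breaker for the heap
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heap += [(0.0, next(tie), {}) for _ in range(dummies)]
    heapq.heapify(heap)
    while len(heap) > 1:
        merged, total = {}, 0.0
        for digit in range(m):                 # combine the m least probable nodes
            p, _, codes = heapq.heappop(heap)
            total += p
            for sym, word in codes.items():    # prepend this branch's digit
                merged[sym] = str(digit) + word
        heapq.heappush(heap, (total, next(tie), merged))
    return heap[0][2]

# The six-letter alphabet of Table 3.21.
probs = {"a1": 0.20, "a2": 0.05, "a3": 0.20, "a4": 0.20, "a5": 0.25, "a6": 0.10}
code = ternary_huffman(probs)
avg = sum(probs[s] * len(code[s]) for s in probs)   # average = 1.70 ternary digits
```

Note the padding step: for six letters and m = 3, one dummy letter of probability zero is added so that the first reduction, like every later one, combines three nodes.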
3.4 Adaptive Huffman Coding
Huffman coding requires knowledge of the probabilities of the source sequence. If this
knowledge is not available, Huffman coding becomes a two-pass procedure: the statistics are
collected in the first pass, and the source is encoded in the second pass. In order to convert
this algorithm into a one-pass procedure, Faller [ 23 ] and Gallager [ 21 ] independently developed
adaptive algorithms that construct the Huffman code from the statistics of the symbols
already encountered. These algorithms were later improved by Knuth [ 24 ] and Vitter [ 25 ].
Theoretically, if we wanted to encode the ( k +1)th symbol using the statistics of the first
k symbols, we could recompute the code using the Huffman coding procedure each time a
symbol is transmitted. However, this would not be a very practical approach due to the large
amount of computation involved—hence, the adaptive Huffman coding procedures.