Arithmetic Coding
4.1 Overview
In the previous chapter, we saw one approach to generating variable-length codes.
In this chapter, we see another, increasingly popular, method of generating
variable-length codes called arithmetic coding. Arithmetic coding is especially
useful when dealing with sources with small alphabets, such as binary sources,
and alphabets with highly skewed probabilities. It is also a very useful approach
when, for various reasons, the modeling and coding aspects of lossless compression are to be
kept separate. In this chapter, we look at the basic ideas behind arithmetic coding, study some
of the properties of arithmetic codes, and describe an implementation.
4.2 Introduction
In the last chapter, we studied the Huffman coding method, which guarantees a coding rate R
within 1 bit of the entropy H. Recall that the coding rate is the average number of bits used
to represent a symbol from a source and that, for a given probability model, the entropy is the
lowest rate at which the source can be coded. We can tighten this bound somewhat. It has been
shown [21] that the Huffman algorithm will generate a code whose rate is within p_max + 0.086
of the entropy, where p_max is the probability of the most frequently occurring symbol. We
noted in the last chapter that, in applications where the alphabet size is large, p_max is generally
quite small, and the amount of deviation from the entropy, especially as a percentage
of the rate, is quite small. However, in cases where the alphabet is small and the probabilities
are highly skewed, p_max can be large, and the Huffman code can become quite inefficient
relative to the entropy.
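To make the size of this gap concrete, here is a minimal Python sketch comparing the Huffman rate with the entropy for a skewed binary source; the probabilities are hypothetical, chosen only for illustration. With a two-letter alphabet, any prefix code, Huffman's included, must spend at least one bit per symbol, so the rate is pinned at 1 no matter how skewed the source is.

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical skewed binary source: P(a) = 0.95, P(b) = 0.05.
probs = [0.95, 0.05]
H = entropy(probs)

# With only two letters, any prefix code (Huffman's included)
# must assign at least one bit to each symbol.
huffman_rate = 1.0

p_max = max(probs)
bound = H + p_max + 0.086  # the tightened bound cited in the text [21]

print(f"entropy H      = {H:.3f} bits/symbol")
print(f"Huffman rate R = {huffman_rate:.3f} bits/symbol")
print(f"redundancy     = {huffman_rate - H:.3f} bits "
      f"({100 * (huffman_rate - H) / H:.0f}% of H)")
assert huffman_rate <= bound  # R is within p_max + 0.086 of H
```

For this source the entropy is only about 0.29 bits/symbol, so the fixed 1 bit/symbol Huffman rate is roughly two and a half times the entropy. This is exactly the regime, a small alphabet with highly skewed probabilities, in which the arithmetic coding approach described in this chapter becomes attractive.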
 
 