information loss occurs. We can design a good quantization scheme so that only
the information that is less important to the HVS is suppressed.
At the decoder side, only the quantized error information, $e_n$, is available.
We then need to de-quantize the error, and the reconstructed signal will be

$$\hat{f}_n = \bar{f}_n + e_n, \qquad (2.5)$$

where $\bar{f}_n$ is the predicted value of the current sample.
It can be seen that, by letting $\Pr(f_{n-1}, f_{n-2}, \ldots, f_{n-i}) = f_{n-1}$, Fig. 2.3
becomes differential pulse code modulation (DPCM). DPCM is therefore a
simplified version of predictive coding in which the previous pixel value is
used as the prediction for the current one.
The choice of predictor determines the efficiency of predictive coding:
a good predictor will compress the data more effectively. We can also adaptively
adjust the prediction coefficients and the quantization levels so that they
are tailored to the local image contents. Predictive coding is called adaptive
when the prediction coefficients or the quantization levels vary with the image
contents.
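To make the DPCM pipeline concrete, the following sketch (with illustrative names and an assumed quantization step size; the text gives no code) uses the previous reconstructed sample as the prediction and a simple uniform quantizer. Both encoder and decoder reconstruct samples via Eq. (2.5), which keeps the two sides synchronized.

```python
# Minimal 1-D DPCM sketch: previous reconstructed sample as prediction,
# uniform quantization of the prediction error. Names and the step size
# are illustrative choices, not taken from the text.

def dpcm_encode(samples, step=8):
    """Return one quantized prediction-error index per input sample."""
    indices = []
    prev_recon = 0                         # prediction for the first sample
    for f_n in samples:
        e_n = f_n - prev_recon             # prediction error
        q = round(e_n / step)              # quantize the error
        indices.append(q)
        prev_recon = prev_recon + q * step # Eq. (2.5): prediction + de-quantized error
    return indices

def dpcm_decode(indices, step=8):
    """Reconstruct samples from the quantized error indices."""
    recon = []
    prev_recon = 0
    for q in indices:
        f_hat = prev_recon + q * step      # Eq. (2.5)
        recon.append(f_hat)
        prev_recon = f_hat
    return recon

pixels = [100, 104, 103, 110, 120, 118]
codes = dpcm_encode(pixels)
print(codes)                # small error indices, cheaper to entropy-code
print(dpcm_decode(codes))   # close to the original (lossy due to quantization)
```

Because the encoder predicts from its own reconstructed values rather than the original ones, the decoder can form exactly the same predictions and the quantization error does not accumulate.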
2.2.2 Huffman Coding
Huffman coding and arithmetic coding (to be introduced in the next section)
are both entropy-based variable-length coding methods, which can achieve a higher
data density than fixed-length codes if the symbols in the signal source differ
in frequency of occurrence: the more frequently a symbol occurs, the shorter
its codeword.
In 1952, David Huffman [6] designed a lossless coding procedure that assigns
fewer bits to the more probable symbols and more bits to the less probable
symbols. He was not the first to pursue this idea, but his paper presented the
optimal solution for assigning such codes, which remains one of the most popular
algorithms today and can be found in many state-of-the-art data compression
systems.
Huffman coding involves two steps: a source reduction step followed by a
codeword construction step. During source reduction, we first sort the symbols
in order of their probability, then combine the two least probable symbols into
a parent symbol whose probability is the sum of its two children. This process
continues until the reduced source contains only two symbols. Fig. 2.4 shows an
example of Huffman coding. The top diagram shows the source reduction process,
while the bottom diagram shows the code assignment process. This diagram is
sometimes called a Huffman code tree.
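As a rough illustration of these two steps (a sketch, not code from the text), the following builds the reduction tree with a priority queue and then walks the tree back down to assign codewords; the symbol names and probabilities are supplied by the caller.

```python
import heapq
import itertools

def huffman_codes(probabilities):
    """Build a Huffman code table from a dict {symbol: probability}.

    Source reduction: repeatedly merge the two least probable nodes.
    Codeword construction: walk the resulting tree, appending '0'/'1'.
    """
    counter = itertools.count()          # tie-breaker so heapq never compares nodes
    # Heap entries: (probability, tie_breaker, tree_node); leaves are 1-tuples
    heap = [(p, next(counter), (sym,)) for sym, p in probabilities.items()]
    heapq.heapify(heap)

    while len(heap) > 1:                 # source reduction
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), (left, right)))

    codes = {}
    def assign(node, prefix):            # codeword construction
        if len(node) == 1:               # leaf: a single symbol
            codes[node[0]] = prefix or "0"
        else:
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")

    assign(heap[0][2], "")
    return codes
```

Note that symbols with equal probability can be merged in either order, so the exact code table may differ from the one in Fig. 2.4 while still achieving the same average code length.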
As shown in Fig. 2.4, after Huffman coding the two least probable symbols,
Z and Q, are assigned codes of length 5, while the most probable symbol, S,
is coded with only one bit. For a source of 6 symbols, a fixed-length code
would require $\lceil \log_2 6 \rceil = 3$ bits per symbol. Using Huffman coding, the average
code length for the above example can be calculated as
$5 \times (0.01 + 0.02) + 4 \times 0.05 + 3 \times 0.07 + 2 \times 0.38 + 1 \times 0.47 = 1.79$ bits/symbol.
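As a quick check of this arithmetic, using only the code lengths and probabilities quoted above:

```python
lengths = [5, 5, 4, 3, 2, 1]                    # Huffman code lengths from Fig. 2.4
probs   = [0.01, 0.02, 0.05, 0.07, 0.38, 0.47]  # symbol probabilities
avg_len = sum(l * p for l, p in zip(lengths, probs))
print(round(avg_len, 2))                        # 1.79 bits/symbol on average
```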