Image Processing Reference
Entropy coding sounds complicated. Implementing it well is a nontrivial task, but as a concept it is quite simple.
If you recall how Samuel Morse designed his telegraphic code in 1838, you may remember that he first analyzed the frequency with which different letters occurred in English-language texts. Morse found that the most common letter was "E," and so he allocated the shortest possible code to it. The least often-used letter, "Z," was given a much longer code. This accomplished some compression of the message transmission, because the more common parts of the message were coded with shorter representations.
Entropy coding does the same sort of thing for the DCT-encoded macroblock. Taking the linearized sequence of coefficients, the message is parsed and converted into tokens that represent structures within the coded block. These tokens are selected so that the shortest tokens (measured in bits) describe the most common structures within the image data.
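A minimal sketch of this tokenization step may help. The function below is a simplification, not any particular codec's format: it assumes JPEG-style run-length tokens, where a run of zeros followed by a nonzero value collapses into a single (run, value) pair, and a trailing run of zeros becomes an end-of-block marker.

```python
def tokenize(coeffs):
    """Convert a linear (zigzag-ordered) coefficient sequence into
    (zero_run, value) tokens plus an end-of-block marker.

    Long runs of zeros -- the most common structure after the DCT --
    each collapse into a single short token."""
    tokens = []
    run = 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            tokens.append((run, c))
            run = 0
    tokens.append("EOB")  # end of block: all remaining coefficients are zero
    return tokens

# A typical block: a few significant low-frequency values, then zeros.
print(tokenize([52, -3, 0, 0, 1, 0, 0, 0]))
# -> [(0, 52), (0, -3), (2, 1), 'EOB']
```

The eight input coefficients become three tokens and a marker; the subsequent entropy-coding stage then assigns the shortest bit strings to the most frequent tokens.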
To understand the minutiae of entropy coding, you should consult a text on the theory of video compression. A useful keyword to search for on the Web is "Huffman," the name of one widely used entropy-encoding technique.
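The core of Huffman's technique can be sketched briefly. This is an illustrative implementation, not the exact variant any particular video standard uses: it repeatedly merges the two least frequent nodes so that, like Morse's scheme, the most common symbols end up with the shortest bit strings.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code from symbol frequencies: the most frequent
    symbols receive the shortest bit strings."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, unique tiebreaker, {symbol: code_so_far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent nodes; prefix their codes with 0/1.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("EEEEEEETTTAAZ")
# 'E' (most frequent) gets the shortest code, 'Z' (rarest) the longest.
assert len(codes["E"]) <= len(codes["T"]) <= len(codes["Z"])
```

In real codecs the token tables are typically fixed in advance by the standard rather than rebuilt per block, but the principle is the same.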
If 100% of the entropy-coded data is transferred to the decoder and is available for reconstruction of the image, then the codec is said to be lossless. The transfer route may impose a limit on the amount of data that can be delivered, and so a truncation may happen. This truncation of entropy-coded data reduces detail in the image, and the codec is therefore lossy.
Judging the optimum amount of entropy-coded data to discard is where the craft skill of the compression expert is concentrated. This is more of an art than a science; after you have compressed a significant amount of video, you will develop some instincts for the best settings given the source content that you observe.
Setting the truncation too harshly is probably the single most common reason why coded video looks noisy. The artifacts become visible very easily, and the effect is particularly noticeable because it does not remain stationary. Notice the coding artifacts around the edges of caption text, for example.
If you remember the comment in the introductory paragraphs of Chapter 1, I suggested that you might be able to entropy-code your working day. The idea is that if you complete all your tasks by the middle of the afternoon and have nothing else to do that day, you might as well cut the day short and go home early. You just entropy-coded your day in the office!
Motion compensation is another opportunity for reducing the data to be transmitted; it is
applied after creating the macroblocks.
It is very likely that some blocks are identical to one another. That data reduction should already
have taken place. With a little more work, it might be possible to find a macroblock that is