8.4 Information Theory Revisited
In order to study the trade-offs between the rate and the distortion of lossy compression schemes, we would like to have the rate defined explicitly as a function of the distortion for a given distortion measure. Unfortunately, this is generally not possible, and we have to go about it in a more roundabout way. Before we head down this path, we need a few more concepts from information theory.
In Chapter 2, when we talked about information, we were referring to letters from a single
alphabet. In the case of lossy compression, we have to deal with two alphabets, the source
alphabet and the reconstruction alphabet. These two alphabets are generally different from
each other.
Example 8.4.1:
A simple lossy compression approach is to drop a certain number of the least significant
bits from the source output. We might use such a scheme between a source that generates
monochrome images at 8 bits per pixel and a user whose display facility can display only 64
different shades of gray. We could drop the two least significant bits from each pixel before
transmitting the image to the user. There are other methods we can use in this situation that
are much more effective, but this is certainly simple.
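As a rough illustration (our own sketch, not part of the text), the following Python fragment drops the two least significant bits of an 8-bit pixel value and maps the resulting 6-bit index back to an 8-bit shade by shifting zeros back in; the function names are hypothetical.

    def encode_pixel(value_8bit: int) -> int:
        # Drop the two least significant bits: 0..255 -> 0..63 (64 gray shades).
        return value_8bit >> 2

    def decode_pixel(value_6bit: int) -> int:
        # Shift zeros back in to obtain an 8-bit shade.
        return value_6bit << 2

    for original in (0, 37, 255):
        code = encode_pixel(original)
        print(original, "->", code, "->", decode_pixel(code))
    # 37 is reconstructed as 36: the reconstruction differs from the source, so the scheme is lossy.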
Suppose our source output consists of 4-bit words {0, 1, 2, ..., 15}. The source encoder encodes each value by shifting out the least significant bit, so the output alphabet for the source coder is {0, 1, 2, ..., 7}. At the receiver we cannot recover the original value exactly. However, we can get an approximation by shifting in a 0 as the least significant bit or, in other words, multiplying the source encoder output by two. Thus, the reconstruction alphabet is {0, 2, 4, ..., 14}, and the source and reconstruction do not take values from the same alphabet.
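A small Python sketch of this 4-bit scheme (again our own illustration, not part of the example) makes the two alphabets explicit:

    source_alphabet = list(range(16))                             # {0, 1, ..., 15}
    encoder_alphabet = sorted({x >> 1 for x in source_alphabet})  # shift out the LSB
    reconstruction_alphabet = sorted({(x >> 1) << 1 for x in source_alphabet})  # shift a 0 back in

    print(encoder_alphabet)          # [0, 1, 2, 3, 4, 5, 6, 7]
    print(reconstruction_alphabet)   # [0, 2, 4, 6, 8, 10, 12, 14]

Note that, for example, both 12 and 13 are reconstructed as 12; the mapping is not invertible, which is precisely what makes the scheme lossy.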
As the source and reconstruction alphabets can be distinct, we need to be able to talk
about the information relationships between two random variables that take on values from
two different alphabets.
8.4.1 Conditional Entropy
Let X be a random variable that takes values from the source alphabet $\mathcal{X} = \{x_0, x_1, \ldots, x_{N-1}\}$. Let Y be a random variable that takes on values from the reconstruction alphabet $\mathcal{Y} = \{y_0, y_1, \ldots, y_{M-1}\}$. From Chapter 2, we know that the entropies of the source and the reconstruction are given by
$$H(X) = -\sum_{i=0}^{N-1} P(x_i) \log_2 P(x_i)$$

and

$$H(Y) = -\sum_{j=0}^{M-1} P(y_j) \log_2 P(y_j)$$
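For concreteness, here is a small Python sketch (our own, using an arbitrary example distribution) that evaluates this entropy expression:

    import math

    def entropy(probabilities):
        # H = -sum_i p_i log2 p_i, with the usual convention 0 * log 0 = 0.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits (uniform four-letter source)
    print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits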