Consider a message block composed of N successive symbols, where N is user-
selected and presumably large enough but otherwise arbitrary. The N symbols are
transmitted over an error-prone channel which includes, in our context, background
noise and intersymbol interference. In general, the channel output will be a corrupted
version of its input.
As an illustrative example, suppose we have a code book comprising two messages
(each containing N symbols). The first is sent when we wish to transmit a zero, the
second to transmit a one. If the two messages are initially quite similar, then the channel distortion will likely blur any distinction between them, obscuring the information
they are supposed to encode from the channel output. If, conversely, the two messages
are quite disparate in form, then their distorted versions at the channel output stand a
better chance of remaining distinguishable, allowing the receiver to confidently read a
zero or a one. This is the simplest example of code design, aiming to transmit one bit of
information per message. The salient feature is not so much that the channel output
match the channel input, but rather that distinct messages at the channel input
remain distinguishable at the channel output.
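To make this concrete, here is a minimal simulation sketch, not taken from the text: two hypothetical two-message code books of N = 8 symbols each, one pair nearly identical and one pair antipodal, sent over an additive Gaussian noise channel and decoded by minimum Euclidean distance. The parameter values are illustrative.

```python
import numpy as np

# Two code books of N = 8 symbols: a "similar" pair differing in one
# symbol, and a "disparate" (antipodal) pair. Minimum-distance decoding
# recovers the transmitted bit far more reliably from the disparate pair.
rng = np.random.default_rng(0)
N, trials, noise_std = 8, 10_000, 1.0

similar = np.array([[1.0] * N, [1.0] * 7 + [-1.0]])   # differ in one symbol
disparate = np.array([[1.0] * N, [-1.0] * N])          # antipodal messages

for name, book in [("similar", similar), ("disparate", disparate)]:
    bits = rng.integers(0, 2, trials)                  # bit selects message
    sent = book[bits]                                  # shape (trials, N)
    received = sent + noise_std * rng.normal(size=sent.shape)
    # decode by picking the nearer code word (minimum Euclidean distance)
    d0 = ((received - book[0]) ** 2).sum(axis=1)
    d1 = ((received - book[1]) ** 2).sum(axis=1)
    decoded = (d1 < d0).astype(int)
    print(f"{name:9s} code book: error rate = {(decoded != bits).mean():.4f}")
```

With these settings the similar pair is misread roughly 16% of the time, while the antipodal pair is almost never confused, illustrating why disparate messages survive channel distortion.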
The operational definition of channel capacity relates to the maximum number of messages, say m, that are distinguishable (with high enough probability) at the channel output. The logarithm to base 2 of this quantity, say $K = \log_2 m$, is the number of bits required of a binary counter which enumerates these distinguishable messages. This is, in effect, the number of bits that each message can convey: take K information bits from a source, and interpret them as the bits of a K-bit binary counter. The value of the counter is the index of a message to send over the channel. Provided the receiver can distinguish separate messages, it need only identify the index of the message sent, whose K-bit binary representation contains the K information bits that were to be communicated. Since each message comprises N symbols, the effective communication rate is K/N bits per channel use on average.
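The indexing argument is simple enough to state in a few lines of code. The following sketch (values chosen for illustration) maps K information bits to a message index and back, and reports the resulting rate:

```python
# Minimal sketch of the indexing argument: K information bits name one of
# m = 2**K distinguishable messages of N symbols each, so the rate is
# K / N bits per channel use. K and N here are illustrative values.
K, N = 3, 8
m = 2 ** K                                       # number of messages

info_bits = [1, 0, 1]                            # K bits from the source
index = int("".join(map(str, info_bits)), 2)     # bits -> message index
# ... message `index` is sent; the receiver identifies which message ...
recovered = [int(b) for b in format(index, f"0{K}b")]  # index -> bits
assert recovered == info_bits
print(f"m = {m} messages, rate = {K}/{N} = {K / N:.3f} bits per channel use")
```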
So what is this maximum number of distinguishable messages, or better still, its
logarithm? The analytic solution was derived by Shannon [8] as
$$\text{Capacity} = \max_{\Pr(d)} I(d; y) \quad \text{(in bits per message sent)}$$
in which the mutual information I(d; y) is maximized over all probability distributions Pr(d) of the input, where d and y are vectors containing the input and output sequences, respectively. The mutual information expression varies according to whether the channel inputs and outputs are discrete or continuous. If both inputs and outputs are discrete, then
$$I(d; y) = \sum_{d,\,y} \Pr(d, y) \log_2 \frac{\Pr(d, y)}{\Pr(d)\Pr(y)}$$
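A short numerical sketch, not from the text, may help here: the function below evaluates the discrete mutual information from a joint pmf, and a grid search over input distributions estimates the capacity of a binary symmetric channel, for which the closed form 1 - H(eps) is known. The channel and crossover probability are illustrative choices.

```python
import numpy as np

def mutual_information(p_joint):
    """I(d; y) in bits from a joint pmf Pr(d, y) given as a 2-D array."""
    p_d = p_joint.sum(axis=1, keepdims=True)    # marginal Pr(d)
    p_y = p_joint.sum(axis=0, keepdims=True)    # marginal Pr(y)
    mask = p_joint > 0                          # 0 * log 0 = 0 by convention
    return float((p_joint[mask]
                  * np.log2(p_joint[mask] / (p_d @ p_y)[mask])).sum())

# Example: binary symmetric channel with crossover probability eps.
# Capacity = max over Pr(d) of I(d; y) = 1 - H(eps), reached at Pr(d) = 1/2.
eps = 0.1
transition = np.array([[1 - eps, eps],
                       [eps, 1 - eps]])         # rows are Pr(y | d)

best = 0.0
for p in np.linspace(0.001, 0.999, 999):        # grid over input Pr(d = 1)
    p_input = np.array([1 - p, p])
    p_joint = p_input[:, None] * transition     # Pr(d, y) = Pr(d) Pr(y | d)
    best = max(best, mutual_information(p_joint))

h_eps = -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
print(f"grid-search capacity ~ {best:.4f} bits per channel use")
print(f"closed form 1 - H(eps) = {1 - h_eps:.4f}")
```

The grid search recovers the closed-form capacity at the uniform input distribution, which is exactly the maximization the definition above prescribes.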
whereas if both are continuous, then
$$I(d; y) = \int \Pr(d, y) \log_2 \frac{\Pr(d, y)}{\Pr(d)\Pr(y)} \, dd \, dy$$
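where Pr now denotes probability density functions. As a numerical check of the continuous form, the sketch below (parameters assumed for illustration) integrates the integrand for a Gaussian input through an additive Gaussian noise channel and compares against the classical closed form $\tfrac{1}{2}\log_2(1 + P/\sigma^2)$ bits per symbol.

```python
import numpy as np
from scipy.integrate import dblquad

# Illustrative check: Gaussian input d ~ N(0, P) through an AWGN channel
# y = d + n with n ~ N(0, sigma2). The closed form for this channel is
# I(d; y) = 0.5 * log2(1 + P / sigma2) bits per symbol.
P, sigma2 = 1.0, 0.5

def gauss(x, var):
    return np.exp(-x * x / (2 * var)) / np.sqrt(2 * np.pi * var)

def integrand(y, d):
    p_d = gauss(d, P)                    # input density Pr(d)
    p_y_given_d = gauss(y - d, sigma2)   # channel transition density
    p_joint = p_d * p_y_given_d          # Pr(d, y)
    p_y = gauss(y, P + sigma2)           # output marginal Pr(y)
    return p_joint * np.log2(p_joint / (p_d * p_y))

# finite limits suffice since the densities decay rapidly
mi, _ = dblquad(integrand, -10, 10, lambda d: -10, lambda d: 10)
print(f"numerical  I(d; y) = {mi:.4f} bits")
print(f"closed form        = {0.5 * np.log2(1 + P / sigma2):.4f} bits")
```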