received sequence would be 010 and the decoder would incorrectly decide that a
zero was transmitted.
In this example, one may calculate the probability that a given source bit is received in error. It is the probability that at least 2 of a sequence of 3 bits are received incorrectly, where the probability of a given bit being incorrect is 1/4 and the bits are transmitted independently. The corresponding error probability Pr[error] (i.e., the probability of incorrectly receiving at least 2 bits) may be computed as follows:
\[
\Pr[\mathrm{error}] = \binom{3}{2}\left(\frac{1}{4}\right)^{2}\left(\frac{3}{4}\right) + \left(\frac{1}{4}\right)^{3} = \frac{10}{64}
\]
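This value can be checked numerically. The following minimal Python sketch (an illustration added here, not part of the original text) evaluates the same binomial sum with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 4)  # bit-error probability of the channel

# The majority selector fails iff at least 2 of the 3 transmitted
# copies of the source bit are received in error.
pr_error = sum(comb(3, k) * p**k * (1 - p) ** (3 - k) for k in range(2, 4))

print(pr_error)         # 5/32, i.e., 10/64
print(float(pr_error))  # 0.15625 < 0.25
```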
Obviously, 10/64 < 1/4, and the error probability is reduced considerably. There is, however, a price to pay for this reduction: the sequence to be transmitted is three times as long as the original one. This means that if one wants to synchronize the source with the channel, then one must slow down the rate of the source to 1/3 bit per second (while keeping the channel rate fixed at 1 bit per second).
This procedure can be generalized. Let β < 1/2 be the error probability for each bit, and let each bit be represented by a sequence of 2n + 1 bits.¹ Hence, the effective transmission rate of the source is reduced to 1/(2n + 1) bits per second. In either case, a majority selector is used at the receiving end. The probability Pr[error] of incorrectly decoding a given sequence of 2n + 1 bits is equal to the probability of having n + 1 or more bits in error. This probability can be computed as follows:
\[
\Pr[\mathrm{error}] = \sum_{k=n+1}^{2n+1} \binom{2n+1}{k}\,\beta^{k}\,(1-\beta)^{2n+1-k}
\]
It can be shown that lim_{n→∞} Pr[error] = 0, meaning that the probability of incorrectly decoding a given sequence of 2n + 1 bits can be made arbitrarily small for sufficiently large n. Put differently, one can reduce the error probability to an arbitrarily small value at the expense of decreasing the effective transmission rate toward zero.
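To make the limit concrete, here is a small Python sketch (again an added illustration; pr_error is an ad-hoc name) that evaluates the sum above for β = 1/4 and a few values of n:

```python
from math import comb

def pr_error(n: int, beta: float) -> float:
    """Probability that majority decoding of a (2n+1)-fold repetition
    code fails, i.e., that n+1 or more of the 2n+1 bits are flipped."""
    m = 2 * n + 1
    return sum(comb(m, k) * beta**k * (1 - beta) ** (m - k)
               for k in range(n + 1, m + 1))

# n = 1 reproduces 10/64 = 0.15625 from the example above.
for n in (1, 5, 10, 25):
    print(f"n={n:2d}  rate=1/{2*n + 1:<3d}  Pr[error]={pr_error(n, 0.25):.2e}")
```

Already for moderate n the decoding error probability is negligible, but the rate 1/(2n + 1) has shrunk accordingly, which is exactly the trade-off the fundamental theorem addresses.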
The essence of the fundamental theorem of information theory is that in order to achieve arbitrarily high reliability, it is not necessary to reduce the transmission rate to zero, but only to a number called the channel capacity. The means by which this is achieved is called coding, and the process of coding involves an encoder, as illustrated in Figure 5.1. The encoder assigns to each of a specified group of source signals (e.g., bits) a sequence of symbols called a code word suitable for transmission
¹ This basically means that each source bit is represented by a bit sequence of odd length.