be implemented in all systems, in particular in point-to-multipoint links (e.g.
television broadcasting).
The codes are ideally efficient only if their codewords are long, in the sense
that the error probability can be made arbitrarily small only if the length of
these codewords tends towards infinity. In addition, a good code must keep a
non-null emission or coding rate R = k/n when the number k of information bits
tends towards infinity. That error-free communication is effectively possible
asymptotically at a non-null emission rate is a major result of information
theory, called the fundamental theorem of channel coding, which preceded attempts
to construct practical codes, thus of finite length. This theorem was a powerful
incentive in the search for ever more efficient new codes. Moreover, it presented
engineers with a challenge, insofar as the proof of the theorem was based on
random coding, whose decoding is far too complex to be envisaged in practice.
Although the mathematical proof of the fundamental theorem in its most
general form involves fairly difficult mathematics, we believe that it can be easily
understood with the help of the law of large numbers. This law simply says that
experimental realizations have frequencies, defined as the ratio of the number
of occurrences noted to the total number of attempts, which tend towards the
probabilities of the corresponding events when the number of attempts tends
towards infinity. Let us consider, for example, the game of heads and tails.
With an "honest" coin, after 10,000 throws we could theoretically arrive at the
sequence consisting exclusively of heads (or of tails), but with a probability
of only 2^-10000 ≈ 10^-3010 (for comparison, one second represents about 10^-18
of the time that has elapsed since the creation of the universe). In stark contrast,
the probability that the frequency of heads (or of tails) is close to the mean
1/2 (belonging for example to the interval 0.47 to 0.53) is in the neighbourhood
of 1. In a similar way, an error configuration with a weight close to np, when
n symbols are emitted on a binary symmetric channel of error probability p, is
very likely, provided that the message sent is sufficiently long.
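This concentration can be observed numerically. The following sketch (function names and parameter values are ours, chosen only for illustration) simulates 10,000 fair coin throws and 10,000 uses of a binary symmetric channel with p = 0.05; the observed head frequency lands near 1/2 and the error weight near np = 500, as the law of large numbers predicts.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def coin_frequency(n_throws: int) -> float:
    """Frequency of heads observed over n_throws of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n_throws))
    return heads / n_throws

def bsc_error_weight(n: int, p: float) -> int:
    """Weight (number of flipped symbols) of an error pattern
    produced by a binary symmetric channel of error probability p."""
    return sum(random.random() < p for _ in range(n))

freq = coin_frequency(10_000)
weight = bsc_error_weight(10_000, 0.05)
print(freq)    # very likely in the neighbourhood of 1/2
print(weight)  # very likely in the neighbourhood of np = 500
```

The standard deviation of the frequency shrinks as 1/sqrt(n), which is why longer messages make the typical error weight ever more sharply concentrated around np.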
3.1.4 Geometrical interpretation
Consider the finite space S_n of binary words of n bits, within which the
codewords are separated by a minimum Hamming distance d. It contains 2^n
elements, which are said to be its points. In geometrical terms, saying that
the number of errors is close to np with high probability means that the
received word is represented by a point that very likely lies close to the
surface of a hypersphere of n dimensions in S_n, centred on the emitted word
and whose radius is equal to the expected mean number of errors np. If the
minimum distance d of the code is higher than twice this number, the point on
the surface of this hypersphere is closer to the word effectively emitted than
to any other codeword and therefore identifies it without ambiguity. The
optimal decoding rule, which was presented in Chapter 1, can therefore be
stated thus:
"Choose the codeword closest to the received word"
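The rule just stated can be sketched in a few lines. Here we use, purely as an assumption for illustration, a 3-fold repetition code with two codewords and minimum distance d = 3, so that any single error is corrected; the function names are ours.

```python
def hamming(u: str, v: str) -> int:
    """Hamming distance between two equal-length binary words."""
    return sum(a != b for a, b in zip(u, v))

def decode(received: str, codebook: list[str]) -> str:
    """Optimal rule: choose the codeword closest to the received word."""
    return min(codebook, key=lambda c: hamming(c, received))

# Toy repetition code: minimum distance d = 3, so a single error
# (weight 1 < d/2) leaves the received point closer to the emitted
# codeword than to the other one.
codebook = ["000", "111"]
print(decode("010", codebook))  # → "000": the single error is corrected
print(decode("110", codebook))  # → "111"
```

In the geometric picture above, `decode` finds the codeword whose hypersphere of radius np the received point effectively sits on; exhaustive search over the codebook is exactly what makes random codes impractical to decode at realistic lengths.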