The larger n is, the smaller the probability that this rule gives an erroneous
result, and this probability tends towards 0 (assuming that p is kept constant)
when n tends towards infinity, provided that d > 2np: since about np errors are
expected on average, a minimum distance greater than 2np keeps the received word
closer to the transmitted codeword than to any other codeword. So d also has to
tend towards infinity.
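As a numerical illustration (a minimal simulation sketch, not part of the text; the value of p and the margin factor 1.2 are arbitrary choices), the number of errors on a binary symmetric channel concentrates around np, so the probability that it exceeds half of a minimum distance slightly greater than 2np shrinks as n grows:

```python
import random

def prob_errors_exceed(n, p, frac, trials=5_000):
    """Estimate P(number of channel errors > frac * n) on a binary
    symmetric channel with crossover probability p."""
    count = 0
    for _ in range(trials):
        errors = sum(random.random() < p for _ in range(n))
        if errors > frac * n:
            count += 1
    return count / trials

p = 0.05  # channel error probability (arbitrary illustrative value)
for n in (50, 200, 1000):
    # With d slightly above 2np, half-distance decoding fails only when
    # more than d/2 = 1.2*np errors occur; this becomes rarer as n grows.
    print(n, prob_errors_exceed(n, p, frac=1.2 * p))
```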
Still in geometrical terms, the construction of the best possible code can
therefore be interpreted as choosing M < 2^n points belonging to S_n
in such a way that they are as far away as possible from each other (note that
the inequality M < 2^n implies that the code is necessarily redundant). For a
given value of the error probability p of the channel (still assumed to be binary
symmetric), it is clear that there is a limit to the number M of points that can
be placed in S_n while keeping the distance between these points greater than
2np. Let M_max be this number. The value
C = \lim_{n \to \infty} \frac{\log_2(M_{\max})}{n}
measures in shannons the greatest quantity of information per symbol that can
be communicated without error through the channel, and it happens to
coincide with the capacity of the channel defined in Section 3.1. No explicit
procedure is generally known for determining M_max points in S_n while keeping
the distance between these points greater than 2np, except in a few simple and
not very useful cases.
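Section 3.1 itself is not reproduced here, but for the binary symmetric channel the capacity referred to above has the well-known closed form C = 1 - H_2(p), with H_2 the binary entropy function. A minimal sketch evaluating it (illustrative; the sample values of p are arbitrary):

```python
from math import log2

def bsc_capacity(p):
    """Capacity in shannons per symbol of a binary symmetric channel:
    C = 1 - H2(p), with H2 the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)

for p in (0.01, 0.05, 0.11, 0.5):
    print(f"p = {p}: C = {bsc_capacity(p):.4f} Sh")
```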
3.1.5 Random coding
Random coding, that is, the construction of a code by randomly choosing its
elements, is a way of choosing M scattered points in the space S_n. This method
is optimal for the distribution of distances when n tends towards infinity.
Random coding makes the points, on average, equally distributed in all n
dimensions of S_n, and it reaches a mean emission rate equal to the capacity of
the channel. For a code containing M codewords of length n, it means drawing
each bit of a codeword at random, independently of the others, with probability
1/2 that it is 0 or 1, the M codewords that make up the code being drawn in
the same way, independently of each other. The probability of a particular
codeword c is P_c = 2^{-n}. We thus obtain codewords whose weights follow a
binomial distribution, and the probability of obtaining a codeword of weight
w is given by (3.3) for p = 1/2, that is:
P_w = \binom{n}{w} 2^{-n}    (3.7)
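A minimal sketch of the random coding procedure just described (the values of n and M are arbitrary illustrative choices), drawing every bit as an independent fair coin and comparing the empirical weight frequencies with (3.7):

```python
import random
from math import comb

n, M = 16, 2000   # codeword length and code size (arbitrary choices)

# Random coding: every bit of every codeword is an independent fair coin.
code = [[random.randint(0, 1) for _ in range(n)] for _ in range(M)]

weights = [sum(c) for c in code]
for w in (n // 4, n // 2, 3 * n // 4):
    empirical = weights.count(w) / M
    theoretical = comb(n, w) * 2 ** -n   # P_w from equation (3.7)
    print(f"w = {w:2d}: empirical {empirical:.3f}  theoretical {theoretical:.3f}")
```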
The mathematical expectation, or mean, of this weight is n/2 and its variance
equals n/4. For very large n, a good approximation of the weight distribution
of the codewords obtained by random coding is a Gaussian distribution.
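Concretely (a worked step added here; it is the standard de Moivre-Laplace approximation applied to the mean n/2 and variance n/4 given above), for large n:

P_w \approx \frac{1}{\sqrt{\pi n / 2}} \, \exp\!\left( -\frac{(w - n/2)^2}{n/2} \right)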