of its distances, and that of a linear code on the distribution of its weights,
we can undertake to build a linear code having a weight distribution close to
that of random coding. This idea has not been much exploited directly, but we
can interpret turbo codes as being a first implementation. Before returning to
the design of coding procedures, we will make an interesting remark concerning
codes that imitate random coding.
The probability of obtaining a codeword of length n and weight w by randomly
drawing the bits 0 and 1, each with a probability of 1/2 and independently of
each other, is given by (3.7). Drawing a codeword $2^k$ times, we obtain an
average number of words of weight w equal to:

$$N_{w,k} = \frac{\binom{n}{w}}{2^{\,n-k}}$$
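As a quick numerical illustration (the code and parameters below are a sketch of mine, not taken from the text), the following Python snippet evaluates $N_{w,k}$ for a hypothetical random (n, k) = (128, 64) code:

```python
from math import comb

def expected_weight_count(n: int, k: int, w: int) -> float:
    """Average number of weight-w words among 2**k randomly drawn
    binary words of length n: N_{w,k} = C(n, w) / 2**(n - k)."""
    return comb(n, w) / 2 ** (n - k)

# Hypothetical example: length n = 128, dimension k = 64
n, k = 128, 64
for w in (8, 16, 24, 32):
    print(f"w = {w:3d}   N_w,k = {expected_weight_count(n, k, w):.3e}")
```

For small w the expected count is far below 1, meaning that such low-weight words almost never appear in a random code; the count only reaches the order of 1 near the average minimum weight discussed next.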
Assuming that n, k and w are large, we can express $\binom{n}{w}$ approximately,
using the Stirling formula:

$$\binom{n}{w} \approx \frac{1}{\sqrt{2\pi}}\;\frac{n^{\,n+1/2}}{w^{\,w+1/2}\,(n-w)^{\,n-w+1/2}}$$
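To get a feel for the quality of this approximation (a side check of mine, with arbitrarily chosen values), one can compare it with the exact binomial coefficient:

```python
from math import comb, pi, sqrt

def binom_stirling(n: int, w: int) -> float:
    """Stirling-based approximation of C(n, w):
    n**(n+1/2) / (sqrt(2*pi) * w**(w+1/2) * (n-w)**(n-w+1/2))."""
    return n ** (n + 0.5) / (sqrt(2 * pi) * w ** (w + 0.5) * (n - w) ** (n - w + 0.5))

n, w = 64, 16
print(comb(n, w))            # exact binomial coefficient
print(binom_stirling(n, w))  # Stirling approximation, already close at n = 64
```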
The minimal weight obtained on average, that is $w_{min}$, is the largest number
such that $N_{w_{min},k}$ has value 1 as its best integer approximation. The number
$N_{w_{min},k}$ is therefore small. It will be sufficient to set it equal to a constant
$\lambda$ close to 1, which need not be detailed further because it will be
eliminated from the calculation. We must therefore have:
$$\frac{1}{\sqrt{2\pi}}\;\frac{n^{\,n+1/2}}{w_{min}^{\,w_{min}+1/2}\,(n-w_{min})^{\,n-w_{min}+1/2}}\;2^{-(n-k)} = \lambda$$
Taking base-2 logarithms and neglecting the constant relative to n, k and
$w_{min}$, which tend towards infinity, we obtain:

$$\frac{k}{n} \approx 1 - H_2\!\left(\frac{w_{min}}{n}\right)$$
where $H_2(\cdot)$ is the binary entropy function:

$$H_2(x) = \begin{cases} -x\log_2(x) - (1-x)\log_2(1-x) & \text{for } 0 < x < 1 \\ 0 & \text{for } x = 0 \text{ or } x = 1 \end{cases}$$
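The relation $k/n \approx 1 - H_2(w_{min}/n)$ can be inverted numerically to estimate the relative average minimum weight of a random code of a given rate. The bisection below is only an illustrative sketch (the helper names are my own):

```python
from math import log2

def h2(x: float) -> float:
    """Binary entropy function H2(x), with H2(0) = H2(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def relative_wmin(rate: float, tol: float = 1e-12) -> float:
    """Solve H2(delta) = 1 - rate for delta in (0, 1/2), i.e. the
    relative average minimum weight w_min / n of a random code."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h2(mid) < 1 - rate:   # H2 is increasing on (0, 1/2)
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(relative_wmin(0.5))   # roughly 0.11: a random rate-1/2 code of length n
                            # has an average minimum weight of about 0.11 n
```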
The weight $w_{min}$ is the average minimal weight of a code obtained by drawing
at random. Among the set of all the linear codes thus obtained (weights and
distances therefore coinciding), there is at least one whose minimum distance
d is higher than or equal to the average weight $w_{min}$, so that we have:

$$\frac{k}{n} \geq 1 - H_2\!\left(\frac{d}{n}\right) \qquad (3.9)$$
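As a quick numerical reading of (3.9) (the values of d/n below are chosen arbitrarily, for illustration only), one can tabulate the rate guaranteed by the bound for a few relative distances:

```python
from math import log2

def h2(x: float) -> float:
    """Binary entropy function H2(x), with H2(0) = H2(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

# Bound (3.9): the code exhibited by the random-coding argument has a rate
# k/n at least equal to 1 - H2(d/n) for its relative minimum distance d/n.
for delta in (0.05, 0.11, 0.20, 0.30):
    print(f"d/n = {delta:.2f}   k/n >= {1 - h2(delta):.3f}")
```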