decoding. First, the convergence threshold of the turbo decoder, that is, the
signal-to-noise ratio from which the turbo decoder can begin to correct most of
the errors, degrades as the dimension of the concatenation increases. Indeed,
the very principle of turbo decoding consists in considering the elementary codes
one after the other, iteratively. As their redundancy rate decreases when the
dimension of the composite code increases, the first steps of the decoding are
penalized compared to a simple dimension-2 concatenated code. Moreover,
the complexity and the latency of the decoder are proportional to the number
of elementary encoders.
7.3 Turbo codes
Fortunately, regarding the points above, it is not necessary to carry the
dimension N to a high value. By replacing the random permutation Π2 with a
judiciously designed permutation, good performance can be obtained while
limiting ourselves to a dimension N = 2. That is the principle of turbo codes.
Figure 7.3 - A binary turbo code with memory ν = 3 using identical elementary
RSC encoders (polynomials 15, 13). The natural coding rate of the turbo code,
without puncturing, is 1/3.
Figure 7.3 presents a turbo code in its most classical version [7.14]. The
binary input message, of length k, is encoded in its natural order and in a
permuted order by two RSC encoders, called C1 and C2, which may or may not
be terminated. In this example, the two elementary encoders are identical
(generator polynomials 15 for the recursivity and 13 for the construction of
the redundancy), but this is not a necessity. The natural coding rate, without
puncturing, is 1/3. To obtain higher rates, the redundancy symbols Y1 and Y2
are punctured. Another way to obtain higher rates is to adopt m-binary codes
(see 7.5.2).
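To make the structure concrete, here is a minimal Python sketch of the encoder
of Figure 7.3: two identical RSC encoders with memory ν = 3, feedback
polynomial 15 (octal) for the recursivity and polynomial 13 (octal) for the
redundancy, an interleaver Π, and an optional puncturing step. The
pseudo-random permutation and the alternating Y1/Y2 puncturing pattern used to
reach rate 1/2 are illustrative assumptions, not specifications from the text,
and the trellises are left unterminated for brevity.

    # Sketch of the rate-1/3 turbo encoder of Figure 7.3 (illustrative only).
    import random

    def rsc_encode(bits, fb=0o15, ff=0o13, nu=3):
        """RSC encoder; returns the parity bits only (the systematic
        output is the input itself). Trellis left unterminated."""
        state = [0] * nu                                 # shift register s1..s3
        fb_taps = [(fb >> (nu - i)) & 1 for i in range(1, nu + 1)]  # g1..g3 of 1101
        ff_taps = [(ff >> (nu - i)) & 1 for i in range(0, nu + 1)]  # h0..h3 of 1011
        parity = []
        for d in bits:
            a = d
            for s, g in zip(state, fb_taps):             # recursion: a = d XOR feedback
                a ^= s & g
            y = a & ff_taps[0]
            for s, h in zip(state, ff_taps[1:]):         # redundancy bit
                y ^= s & h
            parity.append(y)
            state = [a] + state[:-1]                     # shift the register
        return parity

    def turbo_encode(msg, perm):
        x = list(msg)                                    # systematic part X
        y1 = rsc_encode(msg)                             # parity Y1 (C1, natural order)
        y2 = rsc_encode([msg[perm[i]] for i in range(len(msg))])  # parity Y2 (C2)
        return x, y1, y2                                 # rate 1/3 without puncturing

    k = 16
    msg = [random.randint(0, 1) for _ in range(k)]
    perm = random.sample(range(k), k)                    # placeholder permutation Π
    x, y1, y2 = turbo_encode(msg, perm)
    # Rate 1/2 by puncturing: keep Y1 on even positions, Y2 on odd positions.
    punctured = [y1[i] if i % 2 == 0 else y2[i] for i in range(k)]

For k = 16 input bits, the encoder outputs 48 bits (X, Y1, Y2) before
puncturing, i.e. a rate of 1/3; keeping only every other bit of each parity
stream brings the rate to 1/2.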