By computer search, Rosnes and Takeshita provided, for turbo codes that use 8- and
16-state constituent codes, a very useful list of the best (in terms of minimum
distance) QPPs for a wide range of N (32 ≤ N ≤ 512 and N = 1024) [7.38].
After discussing a necessary and sufficient condition for verifying whether a
quadratic polynomial is a PP modulo N, and providing some examples, let us
discuss some properties of QPPs. It is well known that a linear polynomial,
l(x) = l0 + l1 x (or simply l(x) = l1 x), is guaranteed to be a PP modulo N if
l1 is chosen relatively prime to N (i.e., gcd(l1, N) = 1). Consequently, linear
permutation polynomials (LPPs) always exist for any N, but unfortunately this
is not true for QPPs.
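To make this condition concrete, the following brute-force sketch (the function name and the test value N = 40 are illustrative, not taken from the text) checks whether a polynomial permutes Z_N and confirms that l(x) = l1 x is a PP modulo N exactly when gcd(l1, N) = 1.

from math import gcd

def is_perm_poly(coeffs, N):
    """Brute-force test: f(x) = sum(c_k * x^k) mod N hits every residue exactly once."""
    image = {sum(c * pow(x, k, N) for k, c in enumerate(coeffs)) % N for x in range(N)}
    return len(image) == N

# LPP condition: l(x) = l1*x is a PP modulo N iff gcd(l1, N) = 1 (checked here for N = 40).
N = 40
assert all(is_perm_poly([0, l1], N) == (gcd(l1, N) == 1) for l1 in range(1, N))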
For example, there are no QPPs for N = 11, and for 2 ≤ N ≤ 4096 there are only
1190 values of N that have QPPs (roughly 29%) [7.44]. A theorem in [7.44]
guarantees the existence of QPPs for all N = 8i, i ∈ Z+ (i.e., multiples of a
typical computer byte size of 8). It is shown in [7.44] that some QPPs
degenerate to LPPs (i.e., there exists an LPP that generates the same
permutation over the ring Z_N). A QPP is called reducible if it degenerates
to an LPP; otherwise it is called irreducible. For instance, Example 1 in Case I of
sub-section A can simply be reduced to f(x) = 1 + 3x modulo 8 to obtain the
same permutation. In [7.38], it is shown that some reducible QPPs can achieve
better minimum distances than irreducible QPPs for some short to medium
interleavers. However, for large interleavers, the class of irreducible QPPs is
better (in terms of minimum distance) than the class of LPPs; and when it is not,
that particular length does not admit any good minimum distance at all [7.38].
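A direct, if inefficient, way to test reducibility is to compare the permutation generated by a QPP with every LPP over the same ring. The sketch below does this for N = 8, the smallest length covered by the existence theorem above; the helper names are illustrative, and the exhaustive comparison is only practical for small N.

from math import gcd

def perm_from_poly(coeffs, N):
    """Return the permutation generated by f(x) = sum(c_k * x^k) over Z_N, or None if it is not a PP."""
    perm = [sum(c * pow(x, k, N) for k, c in enumerate(coeffs)) % N for x in range(N)]
    return perm if len(set(perm)) == N else None

def is_reducible_qpp(f1, f2, N):
    """A QPP f(x) = f1*x + f2*x^2 is reducible if some LPP l(x) = l0 + l1*x gives the same permutation."""
    target = perm_from_poly([0, f1, f2], N)
    return any(perm_from_poly([l0, l1], N) == target
               for l1 in range(1, N) if gcd(l1, N) == 1
               for l0 in range(N))

# N = 8 is a multiple of 8, so at least one QPP is guaranteed to exist.
# A constant term only shifts the permutation, so it is omitted from the search.
N = 8
for f1 in range(N):
    for f2 in range(1, N):
        if perm_from_poly([0, f1, f2], N) is not None:
            kind = "reducible" if is_reducible_qpp(f1, f2, N) else "irreducible"
            print(f"f(x) = {f1}x + {f2}x^2 mod {N}: {kind}")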
7.4 Decoding turbo codes
7.4.1 Turbo decoding
Decoding a binary turbo code is based on the schematic diagram of Figure 7.12.
The loop allows each decoder to take advantage of all the information avail-
able. The values considered at each node of the layout are LLRs, the decoding
operations being performed in the logarithmic domain.
The LLR at the output of a decoder of systematic codes can be seen as
the sum of two terms: the intrinsic information, coming from the transmission
channel, and the extrinsic information, which this decoder adds to the former
to perform its correction operation. As the intrinsic information is used by the
two decoders (at different instants), it is the extrinsic information produced by
each of the decoders that must be transmitted to the other as new information,
to ensure joint convergence. Section 7.4.2 details the operations performed to
calculate the extrinsic information, by implementing the MAP algorithm or its
simplified Max-Log-MAP version.
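As a rough illustration of this exchange, here is a minimal Python sketch of the decoding loop. The SISO decoder itself is left as a callable placeholder, and the LLR sign convention, the interleaver representation, and the fixed number of iterations are assumptions made for the example rather than details taken from Figure 7.12.

import numpy as np

def turbo_decode(siso_decode, llr_sys, llr_par1, llr_par2, perm, n_iter=8):
    """Schematic exchange of extrinsic LLRs between two SISO decoders.

    Assumptions (not from the text): siso_decode(sys, par, apriori) returns
    the a posteriori LLRs of the information bits (MAP or Max-Log-MAP),
    perm is the interleaver given as an index array, and LLR = ln P(b=0)/P(b=1).
    """
    inv_perm = np.argsort(perm)              # deinterleaver
    extr_2 = np.zeros_like(llr_sys)          # no a priori knowledge before the first pass
    for _ in range(n_iter):
        # DEC1 (natural order): its a priori input is DEC2's extrinsic output, deinterleaved.
        apri_1 = extr_2[inv_perm]
        apost_1 = siso_decode(llr_sys, llr_par1, apri_1)
        # Keep only the new information: a posteriori minus intrinsic and a priori terms.
        extr_1 = apost_1 - llr_sys - apri_1
        # DEC2 (interleaved order): systematic LLRs and DEC1's extrinsic are interleaved.
        apri_2 = extr_1[perm]
        apost_2 = siso_decode(llr_sys[perm], llr_par2, apri_2)
        extr_2 = apost_2 - llr_sys[perm] - apri_2
    # Hard decision on the final a posteriori LLRs, mapped back to the natural order.
    return (apost_2[inv_perm] < 0).astype(int)

Passing only the extrinsic part prevents each decoder from being fed back the intrinsic information it has already used, which is what allows the joint convergence described above.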
Because of latency effects, the exchange of extrinsic information, in a digital
processing circuit, must be implemented via an iterative process: first decoding