Gaussian channel when the decoder is soft input. This is a major parameter
for communication system designers. For types of channel other than Gaussian
channels (Rayleigh, Rice, etc.), the asymptotic gain is always higher than what
is approximately given by (1.19).
In Figure 1.5, the soft input decoding of the (8, 4, 4) Hamming code gives
the best result, with an observed asymptotic gain of the order of 2.4 dB, in
accordance with relation (1.18) that is more precise than (1.19). The (7, 4, 3)
code is slightly less efficient since the product R·d_min is 12/7 instead of 2 for
the (8, 4, 4) code. On the other hand, hard input decoding is unfavourable to
the extended code as it does not offer greater correction capability in spite of
a higher redundancy rate. This example is atypical: in the very large majority
of practical cases, the hierarchy of codes that can be established based on their
performance on a Gaussian channel, with soft input decoding, is respected for
other types of channels.
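The comparison of the two Hamming codes can be checked numerically. This is a minimal sketch assuming relation (1.19) has the usual approximate form Ga ≈ 10 log10(R·d_min), which is not reproduced in this excerpt:

```python
import math

def asymptotic_gain_db(rate, d_min):
    """Approximate asymptotic coding gain (dB) on a Gaussian channel,
    assuming relation (1.19) has the form Ga ~ 10*log10(R * d_min)."""
    return 10.0 * math.log10(rate * d_min)

# (8, 4, 4) extended Hamming code: R = 4/8, d_min = 4, so R*d_min = 2
g_844 = asymptotic_gain_db(4 / 8, 4)
# (7, 4, 3) Hamming code: R = 4/7, d_min = 3, so R*d_min = 12/7
g_743 = asymptotic_gain_db(4 / 7, 3)

print(f"(8,4,4): {g_844:.2f} dB, (7,4,3): {g_743:.2f} dB")
```

The approximation gives about 3.0 dB for the (8, 4, 4) code and 2.3 dB for the (7, 4, 3) code; the slightly smaller 2.4 dB figure observed for the (8, 4, 4) code is consistent with the text's remark that (1.18) is more precise than (1.19).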
1.6 What is a good code?
Figure 1.6 represents three possible behaviours for an error correcting code and
its associated decoder, on a Gaussian channel. To be concrete, the information
block is assumed to have length k = 1504 bits (188 bytes, the typical packet
length in MPEG-2 compression) and the coding rate 1/2.
Curve 1 corresponds to the ideal system. There are in fact limits to the cor-
rection capacity of any code. These limits, whose first values were established
by Shannon (1947-48) and which have been refined since then for practical situ-
ations, are said to be impassable. They depend on the type of noise, on the size
of codeword and on the rate. Their main values are given in Chapter 3.
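As an illustration of such a limit, for the real Gaussian channel with unconstrained (Gaussian) input, setting the capacity equal to the rate gives the classical bound Eb/N0 ≥ (2^(2R) − 1)/(2R). This is a hedged sketch of that computation; the binary-input and finite-length values given in Chapter 3 are somewhat higher:

```python
import math

def shannon_limit_db(rate):
    """Minimum Eb/N0 (dB) for reliable transmission at rate R over the
    real AWGN channel with unconstrained input:
    Eb/N0 >= (2^(2R) - 1) / (2R)."""
    ebn0 = (2.0 ** (2.0 * rate) - 1.0) / (2.0 * rate)
    return 10.0 * math.log10(ebn0)

# For rate 1/2 the unconstrained-input limit is exactly 0 dB.
print(f"Rate 1/2 limit: {shannon_limit_db(0.5):.2f} dB")
```

For the rate-1/2 system considered here, the unconstrained-input limit is 0 dB; the practical limits discussed in the text also account for the finite codeword size.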
Curve 2 describes a behaviour having what we call good convergence but
mediocre MHD (minimum Hamming distance). Good convergence means that the
error rate decreases greatly close to the theoretical limit (this region of steep
decrease is called the waterfall), but the MHD is not sufficient to maintain a
steep slope down to very low error
rates. The asymptotic gain, approximated by (1.19), is reached at a binary error
rate of the order of 10^-5 in this example. Beyond that, the curve remains parallel
to the curve of error rates without coding: Pe = (1/2) erfc(√(Eb/N0)). The asymptotic gain
is here of the order of 7.5 dB. This kind of behaviour, which was not encountered
before the 1990s, is typical of coding systems implementing an iterative decoding
technique (turbo codes, LDPC, etc.), when the MHD is not very high.
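The uncoded reference curve, Pe = (1/2) erfc(√(Eb/N0)) for binary antipodal signalling on the Gaussian channel, can be evaluated directly; a minimal sketch:

```python
import math

def uncoded_ber(ebn0_db):
    """Bit error rate of uncoded binary antipodal (BPSK) signalling on
    the Gaussian channel: Pe = (1/2) * erfc(sqrt(Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)  # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))

# Around 9.6 dB the uncoded error rate is roughly 1e-5; coding gain at a
# given error rate is read off horizontally against this reference curve.
print(f"Pe at 9.6 dB: {uncoded_ber(9.6):.2e}")
```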
Curve 3 shows performance with mediocre convergence and high asymptotic
gain. A typical example of this is the concatenation of a Reed-Solomon code and
of a convolutional code. Whereas the MHD can be very large (around 100, for
example), the decoder can benefit only relatively far from the theoretical limit.
It is therefore not the quality of the code that is in question but the decoder,
which cannot exploit all the information available at reception.