Figure 1.6 - Possible behaviours for a coding/decoding scheme on a Gaussian channel (k = 1504 bits, R = 1/2).
Since Shannon's work, the search for the ideal encoder/decoder pair has
always faced this dilemma: good convergence versus high MHD. Excellent
algebraic codes like BCH or Reed-Solomon codes were developed fairly early
in the history of error-correcting coding (see Chapter 4). Their MHDs are high
(and sometimes even optimal), but soft-input decoding is not always easy to
implement. In addition, algebraic codes are generally "sized" for a specific
codeword length and coding rate, which limits their field of application. In
spite of this, algebraic codes are of great use in applications that require
very low error rates, especially mass memories, and/or when soft information
is not available.
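To make the MHD criterion concrete, the sketch below computes the minimum Hamming distance of a small linear block code by exhaustive enumeration. The (7,4) Hamming code and its generator matrix are our own illustrative choice, not a code discussed in this chapter; for a linear code, the minimum distance equals the minimum weight over the non-zero codewords.

```python
import itertools

# Generator matrix of the (7,4) Hamming code in systematic form
# (an illustrative choice; any linear block code would do).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Multiply the message vector by G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2
                 for col in zip(*G))

def minimum_distance():
    """For a linear code, min distance = min weight of non-zero codewords,
    so enumerating all 2^k - 1 non-zero messages suffices."""
    return min(sum(encode(msg))
               for msg in itertools.product([0, 1], repeat=4)
               if any(msg))
```

For this code the function returns 3, the well-known minimum distance of the Hamming code; such exhaustive enumeration is of course only feasible for short codes.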
It is only recently, with the introduction of iterative probabilistic decoding
(turbo decoding), that it has become possible to obtain efficient error
correction close to the theoretical limit. And it is even more recently that
sufficient MHDs have been obtained to avoid a change of slope that penalizes
the performance curve.
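The theoretical limit referred to here can be made explicit. For a rate-R code on the real-valued Gaussian channel, the unconstrained Shannon limit on Eb/N0 is (2^(2R) - 1)/(2R); a minimal sketch (the function name is ours):

```python
import math

def shannon_limit_db(R):
    """Unconstrained Shannon limit on Eb/N0 (in dB) for a rate-R code
    on the real AWGN channel: Eb/N0 >= (2**(2R) - 1) / (2R)."""
    ratio = (2 ** (2 * R) - 1) / (2 * R)
    return 10 * math.log10(ratio)
```

For R = 1/2, as in Figure 1.6, this gives 0 dB; note that the limit for binary-input (rather than Gaussian-input) signalling is slightly higher.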
It is not easy to find a simple answer to the question posed at the beginning
of this section. Performance is, of course, the main criterion: for a given error
rate, measured either in BER or in PER, and for a fixed coding rate, a good code
is first of all one whose decoder offers a good error correction capability, close
to the corresponding theoretical limit. One preliminary condition for this is obviously
the existence of a decoding algorithm (random codes do not have a decoder, for