containing 240 bits in the systematic part, for example, would mean considering as many codewords as there are atoms in the visible universe (10^80). In spite of this, non-exhaustive decoding methods have been devised for most of the known codes, enabling us to get very close to the optimal result of the ML method.
1.4 Hard output decoding and soft output decoding
When the output of the decoder is not delivered directly to a recipient but is used by another processor whose performance improves when it is fed weighted inputs, this upstream decoder may be required to produce such weighted values. We therefore distinguish the hard output, where the decoder provides logical 0s and 1s, from the soft output. In the latter case, the decoder accompanies its binary decisions with reliability measures, or weights. The output scale of the weighted values is generally the same as the input scale [-V_max, +V_max].
For the extended Hamming code decoder, it is relatively easy to build weighted decisions. When the decoder has calculated the sixteen scalar products, it sorts them in decreasing order. In the first position we find the scalar product of the most likely codeword, that is, the one that decides the signs of the weighted decisions at the output. Then, for each of the four bits of the systematic part, the decoder looks for the highest scalar product that corresponds to a competitor codeword in which the information bit in question is opposite to that of the binary decision. The weight associated with this binary decision is then the difference between the maximum scalar product and the scalar product corresponding to the competitor word. A further division by 2 puts the weighted output on the input scale. This process is optimal for an AWGN perturbation. Taking the example of Figure 1.4 (which is not typical of an AWGN) again, the weights associated with the decisions on the first three bits would be identical and equal to (4.3 - 3.7)/2 = 0.3.
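As an illustration only (not taken from the original text), this procedure can be sketched in a few lines of Python; the particular generator matrix, the antipodal mapping 0 -> +1, 1 -> -1, and the function name soft_decode are assumptions made for this sketch.

import itertools
import numpy as np

# Assumed generator matrix of the (8,4) extended Hamming code in
# systematic form G = [I_4 | P]; the exact matrix is a choice made
# for this example.
P = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
G = np.hstack([np.eye(4, dtype=int), P])

# The sixteen codewords, mapped to antipodal symbols (0 -> +1, 1 -> -1).
infos = np.array(list(itertools.product([0, 1], repeat=4)))
codewords = (infos @ G) % 2
symbols = 1 - 2 * codewords

def soft_decode(r):
    # The sixteen scalar products between the received vector r
    # (8 real samples) and the candidate codewords.
    metrics = symbols @ r
    best = int(np.argmax(metrics))       # most likely codeword
    decisions = infos[best]              # hard decisions on the 4 systematic bits
    weights = np.empty(4)
    for i in range(4):
        # Best competitor whose i-th information bit is opposite.
        rival = metrics[infos[:, i] != decisions[i]].max()
        # Weight = half the gap between the two scalar products,
        # which puts the soft output back on the input scale.
        weights[i] = (metrics[best] - rival) / 2
    return decisions, weights

# Illustrative call with arbitrary received samples (not those of Figure 1.4):
d, w = soft_decode(np.array([0.9, -1.1, 0.3, 0.8, -0.2, 1.0, -0.7, 0.5]))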
From a historical point of view, the first decoding methods were of the hard
input and output type. It was the Viterbi algorithm, detailed in Chapter 5,
that popularized the idea of soft input decoding. Then turbo codes, which are decoded by iterative processing and require weighted values at every stage of this processing, made soft input and output decoders popular. The generic abbreviation used to designate these decoders is SISO, for Soft-Input/Soft-Output.
1.5 The performance measure
The performance of an encoder/decoder pair is first judged in terms of residual
errors at the output of the decoder, when we have fixed a specific evaluation
framework: type of perturbation, length of message, rate of redundancy or coding rate, etc. Other aspects, like the complexity of the decoding, the latencies