non-correctable error detector. Finally, if c' contains three 1s, the decoder will
find a single codeword at a distance of 1 and it will propose this codeword as
the most probable solution, but it will be erroneous.
The error correction capability of the extended Hamming code is therefore
t = 1 error. More generally, the error correction capability of a code with a
minimum distance $d_{\min}$ is:
$$t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor \qquad (1.5)$$
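As a small illustration (our own sketch, not code from the original text), relation (1.5) can be evaluated directly for the two Hamming codes discussed in this introduction; the function name is ours:

```python
# Error correction capability t as a function of the minimum distance d_min,
# following relation (1.5): t = floor((d_min - 1) / 2).

def correction_capability(d_min: int) -> int:
    """Number of errors a code with minimum distance d_min can correct."""
    return (d_min - 1) // 2

# Extended Hamming code (8, 4, 4): d_min = 4 -> t = 1
# Original Hamming code (7, 4, 3): d_min = 3 -> t = 1
for d_min in (4, 3):
    print(f"d_min = {d_min}: t = {correction_capability(d_min)}")
```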
Note that the correction capability of the code given as the example in this
introduction is not decreased if we remove any one of the redundancy symbols.
The MHD drops from 4 to 3, but the correction capability is still one error.
This shortened version is in fact the original Hamming code, the first error-correcting
code in the history of information theory (1948).
In a given family of codes, we describe a particular version by the
shorthand $(n, k, d_{\min})$, where n and k are the lengths of the codewords and
of the source messages, respectively. Up to now, we have thus defined
two Hamming codes, denoted (8, 4, 4) and (7, 4, 3). The second seems more
interesting as it offers the same error correction capability ($t = 1$) with a
redundancy rate $\tau = \frac{n - k}{k}$ of 0.75, instead of 1 for the first one. However, the code
(7, 4, 3) cannot play the role of an error detector: if the received word contains
two errors, the decoder will decide in favour of the single, erroneous codeword
found at a Hamming distance of 1.
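This comparison can be checked numerically with the following hedged sketch (the helper names are ours; $d_{\min} - 1$ is the detection capability of a code used purely as an error detector):

```python
# Compare the two Hamming codes (n, k, d_min) in terms of redundancy rate
# tau = (n - k) / k, correction capability t, and pure detection capability.

def redundancy_rate(n: int, k: int) -> float:
    return (n - k) / k

def correction_capability(d_min: int) -> int:
    return (d_min - 1) // 2

for n, k, d_min in [(8, 4, 4), (7, 4, 3)]:
    print(f"({n}, {k}, {d_min}): tau = {redundancy_rate(n, k):.2f}, "
          f"t = {correction_capability(d_min)}, "
          f"detects up to {d_min - 1} errors (detection only)")
```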
Rather than the redundancy rate, we usually prefer to use the notion of coding
rate, denoted $R$ and defined by:
$$R = \frac{k}{n} = \frac{1}{1 + \tau} \qquad (1.6)$$
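As a quick numerical check (our own sketch), relation (1.6) applied to the two codes above gives R = 0.5 for (8, 4, 4) and R ≈ 0.57 for (7, 4, 3):

```python
# Coding rate R = k / n and its equivalent form 1 / (1 + tau), relation (1.6).

for n, k in [(8, 4), (7, 4)]:
    tau = (n - k) / k
    R = k / n
    # Verify the identity R = 1 / (1 + tau).
    assert abs(R - 1.0 / (1.0 + tau)) < 1e-12
    print(f"n = {n}, k = {k}: tau = {tau:.2f}, R = {R:.3f}")
```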
The product $R\,d_{\min}$ will appear in the sequel as an essential figure of merit
vis-à-vis a perturbation caused by additive noise with a Gaussian distribution.
1.3 Hard input decoding and soft input decoding
We continue this presentation of the basic principles of coding and decoding by
using the example of the extended Hamming code.
The decoding principle defined by (1.4) assumes that the received codeword
c' is composed of binary values, that is, that the transmission of data is carried out
according to the law of perturbation given by the diagram of Figure 1.2: a
0 becomes a 1, and vice versa, with probability p, and the binary values are
correctly received with the complementary probability 1 − p. Such a transmission
channel is called a binary symmetric channel, and the decoder performs
what we call hard input decoding. In certain communication systems (optical
fibres, switched networks, etc.) and in most storage hardware, the decoders can
effectively exploit only binary information.
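A hedged sketch of these two notions, the binary symmetric channel and hard input decoding by exhaustive minimum-distance search, is given below in Python. The generator matrix and helper names are our own assumptions; the book's (8, 4, 4) extended Hamming code may be built from a different but equivalent matrix.

```python
import itertools
import random

# Systematic generator matrix of a (7, 4) Hamming code; an overall parity
# bit is appended to obtain an (8, 4, 4) extended Hamming code.
G7 = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(message):
    """Encode 4 information bits and append the overall parity bit."""
    word = [sum(m * g for m, g in zip(message, col)) % 2 for col in zip(*G7)]
    return word + [sum(word) % 2]

# The 2^4 = 16 codewords of the extended code.
CODEBOOK = [encode(m) for m in itertools.product((0, 1), repeat=4)]

def bsc(word, p, rng):
    """Binary symmetric channel: each bit is flipped with probability p."""
    return [bit ^ (rng.random() < p) for bit in word]

def hard_decode(received):
    """Hard input decoding: return the codeword at minimum Hamming distance."""
    return min(CODEBOOK,
               key=lambda c: sum(a != b for a, b in zip(c, received)))

rng = random.Random(0)
sent = encode([1, 0, 1, 1])
received = bsc(sent, p=0.05, rng=rng)
print("sent:    ", sent)
print("received:", received)
print("decoded: ", hard_decode(received))
```

The exhaustive search over the 16 codewords is practical only for such a small code; practical decoders exploit the algebraic structure of the code instead of enumerating the codebook.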