The detection of errors therefore remains efficient whatever the error probability on the transmission channel, if the number of redundancy symbols (n − k) is large enough. Error detection is thus not very sensitive to the error statistics.
When erroneous symbols are detected, the receiver generally asks the source to send them again. To transmit this retransmission request, a receiver-to-source link, called a return channel, is required. Since the data rate on the return channel is low (a priori, retransmission requests are short and few in number), we can always arrange for the error probability on this channel to be much lower than the error probability on the transmission channel. Thus, the performance of a transmission system using error detection and repetition does not greatly depend on the return channel.
When an error is detected, the source's transmission can be interrupted to enable the corrupted information to be retransmitted. The data rate is therefore not constant, which can be a problem in some cases.
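To make the mechanism concrete, here is a minimal sketch of detection-and-retransmission in Python, assuming a single even-parity bit as the detecting code, a simulated binary symmetric channel, and an error-free return channel; these choices are illustrative assumptions, not prescribed by the text.

    import random

    def encode(data_bits):
        # Append one redundancy symbol: the even-parity bit of the data.
        return data_bits + [sum(data_bits) % 2]

    def detect_error(word):
        # Even parity violated => an odd number of errors occurred.
        # (An even number of errors goes undetected: a single parity
        # bit is a deliberately weak detecting code.)
        return sum(word) % 2 != 0

    def bsc(word, p):
        # Binary symmetric channel: flip each symbol with probability p.
        return [b ^ (random.random() < p) for b in word]

    def transmit(data_bits, p, max_attempts=10):
        codeword = encode(data_bits)
        for attempt in range(1, max_attempts + 1):
            received = bsc(codeword, p)
            if not detect_error(received):
                return received[:-1], attempt  # strip the parity symbol
            # Error detected: a retransmission request travels back over
            # the (assumed error-free) return channel.
        return None, max_attempts

    data, attempts = transmit([1, 0, 1, 1], p=0.05)
    print(data, "delivered after", attempts, "transmission attempt(s)")

Note that the number of attempts, and hence the instantaneous data rate, varies with the channel: this is the non-constant rate mentioned above.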
4.3.2 Error correction
Error correction involves looking for the transmitted codeword c given the received word r. Two strategies are possible. The first corresponds to a received word r at the decoder input made up of binary symbols (the case of a binary symmetric channel), and the second to a received word r made up of analogue symbols (the case of a Gaussian channel). In the first case we speak of hard input decoding, whereas in the second we speak of soft input decoding. We will now examine these two types of decoding, already mentioned in Chapter 1.
Hard decoding
Maximum a posteriori likelihood decoding
For hard decoding, the received word r is of the form:

r = c + e

where c is the transmitted codeword and e is the error word, both made up of binary symbols, the addition being performed modulo 2.
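As a quick illustration, modulo-2 addition is a bitwise XOR; the particular codeword and error pattern below are hypothetical values, not taken from the text.

    c = [1, 0, 1, 1, 0, 0, 1]   # transmitted codeword (binary symbols)
    e = [0, 0, 0, 1, 0, 0, 0]   # error word: a 1 marks a corrupted position
    r = [ci ^ ei for ci, ei in zip(c, e)]   # addition modulo 2 is XOR
    print(r)   # [1, 0, 1, 0, 0, 0, 1]: the bit flipped where e holds a 1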
Maximum a posteriori likelihood decoding involves looking for the codeword c such that:

$$\Pr\{c \mid r\} > \Pr\{c_i \mid r\}, \quad \forall\, c_i \in C(n,k),\ c_i \neq c$$

Using Bayes' rule and assuming that all the codewords are equiprobable, the above decision rule can also be written:

$$\hat{c} = c_i \ \text{ if }\ \Pr(r \mid c = c_i) > \Pr(r \mid c = c_j), \quad \forall\, c_j \in C(n,k),\ c_j \neq c_i$$
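The Bayes step, written out explicitly (a standard identity, not detailed in the text):

$$\Pr\{c_i \mid r\} = \frac{\Pr(r \mid c = c_i)\,\Pr\{c_i\}}{\Pr(r)}$$

With equiprobable codewords, $\Pr\{c_i\} = 2^{-k}$ for every $c_i$, and $\Pr(r)$ does not depend on the candidate codeword, so comparing the a posteriori probabilities $\Pr\{c_i \mid r\}$ amounts to comparing the likelihoods $\Pr(r \mid c = c_i)$.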
Again taking the example of a binary symmetric channel with error probability p and denoting by d_H(r, c) the Hamming distance between r and c, the decision rule amounts to choosing the codeword closest to r in the sense of the Hamming distance: indeed,

$$\Pr(r \mid c) = p^{\,d_H(r,c)}\,(1-p)^{\,n - d_H(r,c)}$$

is a decreasing function of d_H(r, c) whenever p < 1/2.
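A minimal sketch of this exhaustive maximum-likelihood hard decoding follows, assuming the (3,1) repetition code and an arbitrary received word; both are illustrative choices, not examples from the text, and the exhaustive search is only practical for small codes.

    def hamming_distance(a, b):
        return sum(x != y for x, y in zip(a, b))

    def ml_decode(r, codewords, p):
        # Pr(r | c) = p^d * (1 - p)^(n - d) with d = d_H(r, c); for
        # p < 1/2 this likelihood is maximized by the codeword closest
        # to r in Hamming distance.
        def likelihood(c):
            d = hamming_distance(r, c)
            return p**d * (1 - p) ** (len(r) - d)
        return max(codewords, key=likelihood)

    codewords = [(0, 0, 0), (1, 1, 1)]     # the (3,1) repetition code
    r = (1, 0, 1)                          # received word with one error
    print(ml_decode(r, codewords, p=0.1))  # -> (1, 1, 1)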