11.5 Error-Correcting Codes
The aim of science is not to open the door to infinite wisdom, but to set a
limit to infinite error.
Bertolt Brecht (1898-1956), German Dramatist
— from Section 9 of The Life of Galileo (1939)
We have already been introduced to the use of noncryptographic codes in
this chapter. Indeed, Shannon is responsible for the birth of coding theory
with his seminal papers, the consequences of which we have studied in the
preceding sections. Thus far, we have been concerned largely with codes
from a cryptographic viewpoint. However, at the outset, on page 6, we did
promise to look at noncryptographic codes, called error-correcting codes, which
are methods for detecting and/or correcting errors in the transmission of data.
Noncryptographic error correction is required in virtually anything that
works with digitally represented data: satellite communications; telephone com-
munications; fax machines; computers; CD players; and so forth. Coding theory
deals with communications over noisy channels. This noise may be caused by
human error, lightning, electric impulses, thermal noise, deterioration of ma-
chinery, imperfections in the equipment, neighboring channels, and so on. The
goal of error-correcting codes is to encode the data, usually by adding redundancy,
so that the original data may be recovered even if errors (but not too many of
them) have occurred.
A simple example where we replace the original message with an encoding
(see pages 433 and 434), which has built-in redundancy, is given as follows.
Suppose that we want to send the letter C, which has binary representation 10.
If we send a codeword of bitlength 6 as 101010, this is merely the repetition
of the original message three times. Thus, if an error is introduced so that the
received message is 111010, say, then the receiver can still determine the original
message as the most repeated one, namely, 10. Of course, if too many errors are
introduced, such as 111111 being the received message, then there is no hope
of retrieving the original. Thus, a goal of error-correcting codes is to minimize
the probability that errors leave the receiver unable to recover the original message.
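To make the encoding step concrete, the following is a minimal sketch in Python, not taken from the text; the function name encode_repetition and the choice of three repetitions are illustrative assumptions matching the example above.

def encode_repetition(message: str, repetitions: int = 3) -> str:
    # Build the redundant codeword by repeating the bit string.
    return message * repetitions

# The letter C is represented by the bits 10, as in the example above.
print(encode_repetition("10"))  # prints 101010, a codeword of bitlength 6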
Once we encode a message, we need a mechanism to decode it after it passes
through a noisy channel. For our example, in the above illustration, our decoder
would recognize that 10 already appears twice in the received word 111010 and
would output the corrected codeword, consisting of three repetitions of 10, for
the intended user. In other words, the decoder recognizes that the nearest
codeword is the one with the second 1 replaced by a 0, making three repetitions
of the already twice-repeated 10. If, however, it received 111111, it would output
that word unchanged, since it has no basis upon which to determine whether
errors occurred. In this case, we are talking about binary codewords of bitlength 6.
In terms of the definition given on the aforementioned pages,
f(C) = 101010, with |f(C)| = 6.
This example is depicted in Diagram 11.1.
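The decoding rule just described, choosing the nearest codeword by a per-position majority vote over the three repeated blocks, can be sketched as follows. This is again a hypothetical illustration; decode_repetition and its parameters are not names used in the text.

def decode_repetition(received: str, block_len: int = 2, repetitions: int = 3) -> str:
    # Split the received word into its repeated blocks.
    blocks = [received[i * block_len:(i + 1) * block_len] for i in range(repetitions)]
    decoded = ""
    for pos in range(block_len):
        # Take a majority vote for each bit position across the blocks.
        ones = sum(block[pos] == "1" for block in blocks)
        decoded += "1" if ones > repetitions // 2 else "0"
    return decoded

print(decode_repetition("111010"))  # prints 10: the single flipped bit is corrected
print(decode_repetition("111111"))  # prints 11: too many errors, and 10 is lost

With one error, as in 111010, the vote recovers 10; with three errors, as in 111111, the decoder has no way to tell that anything went wrong.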