Many of the algorithms and approaches described later in the book
perform best when they have a perfectly random source of data.
Encrypting a file before applying any of the other approaches is a
good beginning, but it doesn't complete the picture. Sometimes too
much randomness can stick out like a sore thumb. Chapter 17 describes
several algorithms that can flag images with hidden information
by relying on statistical tests that measure, often indirectly, the
amount of randomness in the noise. A file that seems too random
stands out because the noise generated by many digital cameras isn't
as random as it might seem.
The trick is to use some extra processing to add a bit of statistical
color to the data before it is introduced. Chapters 6 and 7 describe
some solutions. Others involve mixing in the hidden message in a
way that doesn't distort the statistical profile of the data.
The world of cryptography began attempting to produce perfect
white noise during World War II, when Claude E. Shannon, a
mathematician then working for Bell Labs, developed the foundations
of information theory and offered an ideal framework for actually
measuring information.
Most people who use computers have a rough idea about just
how much information there is in a particular file. A word processing
document, for instance, has some overhead and about one byte for
each character, a simple equation that doesn't seem to capture the
essence of the problem. If the number of bytes in a computer file is
an accurate measurement of the information in it, then there would
be no way for a compression program to squeeze files to a fraction
of their original size. Real estate can't be squeezed and diamonds
can't be smooshed, but potato chips always seem to come in a bag
filled with air. That's why they're sold by weight, not by volume. The
success of compression programs like PKZIP or Stuffit means that
measuring a file by the number of bytes is like selling potato chips
by volume.
Compression is discussed in Chapter 5.
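As a rough sketch of the same point, the short Python snippet below compresses a highly repetitive string and an equally long block of random bytes with the standard zlib module; the repetitive data collapses to a tiny fraction of its size, while the random data barely shrinks at all.

import os
import zlib

# Two buffers of equal length "by volume" (both about 100,000 bytes).
predictable = b"sunny and 72 degrees " * 5000   # highly repetitive text
random_data = os.urandom(len(predictable))      # unpredictable bytes

for label, data in (("predictable", predictable), ("random", random_data)):
    packed = zlib.compress(data, 9)
    print(f"{label}: {len(data)} bytes -> {len(packed)} bytes "
          f"({len(packed) / len(data):.1%} of original)")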
Shannon's method of measuring information “by weight” rests on
probability. He felt a message had plenty of information if you couldn't
anticipate the contents, but it had little information if the contents
were easy to predict. A weather forecast in Los Angeles doesn't con-
tain much information because it is often sunny and 72 degrees
Fahrenheit. A weather forecast in the Caribbean during hurricane
season, though, has plenty of potential information about coming
storms that might be steaming in.
Shannon measured information by totaling up the probabilities.
A byte has 8 bits and 256 different possible values between 00000000
and 11111111 in base 2. If all of these possible values occur with the
same probability, then each byte carries a full 8 bits of information.
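As a rough sketch of how this measurement works in practice, the short Python snippet below applies Shannon's formula, H = -sum(p_i * log2(p_i)), to a stream of bytes; the function name entropy_bits_per_byte is just an illustrative choice. Uniform data comes out at exactly 8 bits per byte, while heavily biased data scores far lower.

import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    # Shannon's measure: H = -sum(p * log2(p)) over the observed byte values.
    total = len(data)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(data).values())

uniform = bytes(range(256)) * 100      # every value equally likely
biased = b"aaaaaaab" * 3200            # seven 'a's for every 'b'
print(entropy_bits_per_byte(uniform))  # 8.0
print(entropy_bits_per_byte(biased))   # roughly 0.54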