[Figure 6.2: A small Huffman tree built to hide bits in the choice of a new letter. Here, the letter “a” encodes “10”, the letter “e” encodes “0”, and the letter “o” encodes “11”.]
character. This value is weighted by the probabilities.
There are problems, of course, with this scheme. This solution is the best way to hide the information so that it mimics the source text S for the same reason that Huffman codes are the most efficient way to construct tree-like compression schemes. The same proof that shows this works in reverse. Section 6.3.1 shows a more accurate approximation.
But even if it is the best, it falls short of being perfect. In the small example in Figure 6.2, the letter “e” is chosen if the next bit to be hidden is “0”, while either “a” or “o” will be chosen if the next bit is “1”. If the data to be hidden is purely random, then “e” will be chosen 50% of the time while “a” or “o” will be chosen the other 50% of the time. This does not mimic the statistics from the source text exactly. If it did, the letter “e” would be chosen 80% of the time and the other letters would each be chosen 10% of the time. This inaccuracy exists because of the binary structure of the Huffman tree and the number of choices available.
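The mapping in Figure 6.2 can be sketched in a few lines of code. This is a minimal illustration, not the book's implementation; the function names and the bit-string representation are assumptions made here for clarity. Hiding converts a stream of bits into letters by walking the code words; recovery simply concatenates each letter's code word.

```python
# Code words from the small Huffman tree in Figure 6.2 (assumed layout):
# "e" -> "0", "a" -> "10", "o" -> "11".
CODES = {"e": "0", "a": "10", "o": "11"}

def bits_to_letters(bits):
    """Hide a string of bits by choosing the letters whose code words match."""
    decode = {code: letter for letter, code in CODES.items()}
    letters, buf = [], ""
    for b in bits:
        buf += b
        if buf in decode:          # a complete code word has accumulated
            letters.append(decode[buf])
            buf = ""
    return "".join(letters)

def letters_to_bits(text):
    """Recover the hidden bits from the chosen letters."""
    return "".join(CODES[ch] for ch in text)
```

Because the code is prefix-free, recovery is unambiguous: hiding the bits "01011" produces "eao", and reading back "eao" yields "01011" again. The 50%/50% skew described above is visible here too: a random bit stream picks "e" half the time, not 80% of the time.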
6.3 Implementing the Mimicry
There are two major problems in writing software that will generate regular nth-order mimicry. The first is acquiring and storing the statistics. The second is creating a tree structure to do the Huffman-like coding and decoding. The first problem is something that requires a bit more finesse because there are several different ways to accomplish the same ends. The second problem is fairly straightforward.
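The second problem, building the Huffman-like tree from a table of letter weights, can be sketched with the standard greedy construction: repeatedly merge the two lightest subtrees, prefixing "0" to the codes on one side and "1" on the other. This is a generic sketch under assumed names, not the book's code.

```python
import heapq

def huffman_codes(freqs):
    """Build Huffman code words from a {letter: weight} table."""
    # Heap entries: (weight, tiebreak, {letter: code-so-far}).
    # The tiebreak keeps comparisons away from the dicts.
    heap = [(w, i, {ch: ""}) for i, (ch, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # lightest subtree
        w2, _, right = heapq.heappop(heap)   # second lightest
        merged = {ch: "0" + c for ch, c in left.items()}
        merged.update({ch: "1" + c for ch, c in right.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]
```

Fed the Figure 6.2 weights (e: 0.8, a: 0.1, o: 0.1), this yields one one-bit code for "e" and two-bit codes for "a" and "o", matching the shape of the tree in the figure up to relabeling of the branches. Gathering the nth-order statistics that feed this table is the part that takes more finesse.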