[Figure 7.4: The Huffman tree used to hide information in the choice of the Blogs outfielder who makes a particular play. Its leaves are Harrison Harry Hanihan, Orville Baskethands, and Robert Liddlekopf, and its branches are labeled with 0s and 1s.]
The weightings aren't used randomly. If the choice of a particular phrase is going to encode information, then there must be a one-to-one connection between incoming bits and the output. The Huffman trees discussed in "Choosing the Next Letter" on page 92 are the best way to map a weighted selection of choices to incoming bits. The weightings are used to build a tree. Figure 7.4 shows the tree built to hide information in the choice of the Blogs outfielder who makes a play. The same proof that shows that Huffman trees are the optimal way to compress a file shows that this is the best way to encode information.
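As a rough sketch of how this works in practice, the following Python fragment builds a Huffman tree from a set of weighted choices and then consumes hidden bits to pick a phrase. The outfielder names are shortened and the weightings (2, 1, and 1) are invented for the example; the build_huffman_tree and hide_bits helpers are illustrative, not taken from the text.

    import heapq
    from itertools import count

    def build_huffman_tree(weighted_choices):
        """Combine the two lightest nodes repeatedly, Huffman style.
        Leaves are choice names; internal nodes are (left, right) pairs."""
        tie = count()  # tie-breaker so the heap never has to compare nodes
        heap = [(weight, next(tie), name)
                for name, weight in weighted_choices.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, left = heapq.heappop(heap)
            w2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, next(tie), (left, right)))
        return heap[0][2]

    def hide_bits(tree, bits):
        """Walk the tree with the hidden bits (0 = left, 1 = right) and
        return the phrase at the leaf the walk reaches."""
        node = tree
        while isinstance(node, tuple):
            node = node[0] if next(bits) == 0 else node[1]
        return node

    # Hypothetical weightings for the three Blogs outfielders.
    weights = {"Hanihan": 2, "Baskethands": 1, "Liddlekopf": 1}
    tree = build_huffman_tree(weights)

    hidden = iter([1, 0, 1, 1])     # bits of the secret message
    print(hide_bits(tree, hidden))  # the outfielder whose name encodes the next bits

Recovering the hidden bits later only requires the same tree: the path from the root down to whichever outfielder appears in the output spells out the bits that were consumed, which is exactly the one-to-one connection described above.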
Naturally, the Huffman tree only approximates the desired statistical outcome, because the probabilities it can produce are limited to powers of one-half. Figure 7.4 shows how badly the Huffman tree can often be off the mark. One of the choices encodes one bit of information and the other two each encode two bits. This means, effectively, that the first choice will be made 50% of the time and the other two will each be chosen 25% of the time.
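The 50/25/25 split can be read straight off the tree: when the hidden bits are effectively random (an encrypted message looks random), a leaf sitting d branches below the root is reached with probability 2^-d. A small illustrative helper (implied_probabilities is my name for it, not the text's) makes the point for a tree shaped like Figure 7.4:

    def implied_probabilities(node, depth=0):
        """Probability each leaf is chosen when the hidden bits look random:
        a leaf at depth d is reached with probability 2**-d."""
        if not isinstance(node, tuple):
            return {node: 2.0 ** -depth}
        probs = {}
        for branch in node:
            probs.update(implied_probabilities(branch, depth + 1))
        return probs

    # A tree shaped like Figure 7.4: one outfielder one branch below the
    # root, the other two a branch further down.
    tree = ("Hanihan", ("Baskethands", "Liddlekopf"))
    print(implied_probabilities(tree))
    # -> {'Hanihan': 0.5, 'Baskethands': 0.25, 'Liddlekopf': 0.25}

Whatever the original weightings were, this tree shape forces the 50/25/25 behavior.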
The level of inaccuracy decreases as more and more choices are available. For instance, it should be obvious that if a variable can be converted into 2^i different choices, each with equal weighting, then the approximation will be perfect. This will also be the case if all of the weightings are powers of 2 and the total adds up to a power of 2, for instance {1, 1, 2, 4, 2, 2, 4}.
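One way to convince yourself of this is to run the Huffman construction on that weight set and check that each choice lands at a depth d with 2^-d exactly equal to its weight divided by the total of 16. The sketch below does that; code_lengths is an illustrative helper, not something from the text.

    import heapq
    from itertools import count

    def code_lengths(weights):
        """Huffman code length (tree depth) assigned to each weight."""
        tie = count()  # tie-breaker so the heap never compares the dicts
        heap = [(w, next(tie), {i: 0}) for i, w in enumerate(weights)]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, a = heapq.heappop(heap)
            w2, _, b = heapq.heappop(heap)
            merged = {leaf: d + 1 for leaf, d in {**a, **b}.items()}
            heapq.heappush(heap, (w1 + w2, next(tie), merged))
        return heap[0][2]

    weights = [1, 1, 2, 4, 2, 2, 4]   # total = 16, a power of 2
    depths = code_lengths(weights)
    for i, w in enumerate(weights):
        # each choice is picked with probability 2**-depth, exactly w/16
        assert 2.0 ** -depths[i] == w / sum(weights)
    print(sorted(depths.values()))    # -> [2, 2, 3, 3, 3, 4, 4]

Here the weight-4 choices sit two branches down (probability 1/4 = 4/16), the weight-2 choices three branches down (1/8 = 2/16), and the weight-1 choices four branches down (1/16), so the approximation is exact.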