Note that the entropy does not depend on the actual values of the random variable, or on anything else related to them, except the probabilities associated with all values.
That is why we said a few lines earlier that the entropy is defined for a random
variable or a probability distribution. If we want to express the entropy of a random
variable $X$, then we can use the following formula:

$$
H(X) = -\sum_{x \in \mathcal{X}:\, P_X(x) \neq 0} P_X(x) \log_2 P_X(x) \tag{5.1}
$$
Alternatively speaking, $H(X) = E[-\log_2 P_X(X)] = E[g(X)]$ with $g(\cdot) = -\log_2 P_X(\cdot)$.

There are some intuitive properties of the entropy (as a measure of uncertainty). For example, if we add some values to a random variable that are impossible (i.e., their probability is zero), then the entropy does not change. This property can be formally expressed as follows:

$$
H([p_1, \ldots, p_n]) = H([p_1, \ldots, p_n, 0])
$$
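As an illustrative sketch, the following Python snippet computes the entropy as in (5.1), skipping zero-probability values, and confirms numerically that appending an impossible value leaves the result unchanged (the function name `entropy` and the example distribution are arbitrary choices):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution,
    computed as in (5.1): zero-probability values are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]
print(entropy(p))          # 1.5 bits
print(entropy(p + [0.0]))  # still 1.5 bits: H([p1,...,pn]) = H([p1,...,pn,0])
```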
Furthermore, a situation involving a number of alternatives is most uncertain
if all possibilities are equally likely. This basically means that
$$
0 \leq H([p_1, \ldots, p_n]) \leq \log_2 n
$$
with equality on the left side if and only if one value occurs with probability one
(and all other values occur with probability zero), and with equality on the right side
if and only if all values are equally likely (i.e., $p_i = 1/n$). Similarly, we have
$$
0 \leq H(X) \leq \log_2 |\mathcal{X}|
$$
with the same conditions for equality on either side as mentioned earlier. In particular, if $X$ is uniformly distributed, then we have $H(X) = \log_2 |\mathcal{X}|$, and this equation is often referred to as Hartley's formula.
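To see why the uniform case attains this upper bound, one can plug $P_X(x) = 1/|\mathcal{X}|$ into (5.1):

$$
H(X) = -\sum_{x \in \mathcal{X}} \frac{1}{|\mathcal{X}|} \log_2 \frac{1}{|\mathcal{X}|} = |\mathcal{X}| \cdot \frac{1}{|\mathcal{X}|} \log_2 |\mathcal{X}| = \log_2 |\mathcal{X}|
$$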
If we increase the number of alternatives, then we also increase the entropy of
the corresponding probability distribution. This property can be formally expressed
as follows:
$$
H\left(\left[\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right]\right) < H\left(\left[\tfrac{1}{n+1}, \ldots, \tfrac{1}{n+1}\right]\right)
$$
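This strict increase is a direct consequence of Hartley's formula, because the uniform distribution on $n$ values has entropy $\log_2 n$, which grows with $n$. A small Python check illustrates this (the helper name `uniform_entropy` is our own, chosen for illustration):

```python
import math

def uniform_entropy(n):
    """Entropy (in bits) of the uniform distribution on n values,
    summed term by term as in (5.1)."""
    return -sum((1 / n) * math.log2(1 / n) for _ in range(n))

for n in range(2, 6):
    # Matches Hartley's formula log2(n) and increases with n.
    print(n, round(uniform_entropy(n), 6), round(math.log2(n), 6))
```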