In this setting, the secret key K must still be transmitted in a secure way, so we
need a secure channel. To summarize, in order to transmit a message securely, we first
need to set up a key and transmit it securely. Is this a vicious circle? It is not, for two reasons. First, the key can be a very short piece of information, which can be easier to protect than the message itself. We can afford to use an expensive channel to send a short key securely; later on, an inexpensive channel suffices to send long messages securely thanks to encryption. Second, even if the key is long (as is the case for the Vernam cipher), it still makes sense to use an expensive channel to transmit it securely. The secure channel may not be available at all times, or may have long transmission delays (for instance, if this channel consists of shipping the key by airplane from Washington to Moscow). We can use it whenever it is available, anticipating that we will have to transmit a confidential document in the future. Later on, we can use any available channel with fast transmission. This makes the secure channel virtually always available. We summarize this paradigm by saying that, using an expensive, slow, rarely available, but confidential channel, we can transform any other channel into a confidential one by means of the Shannon model of encryption.
1.3.2 Entropy
Given a random variable $X$, we define the entropy by
$$H(X) = -\sum_{x} \Pr[X = x] \log_2 \Pr[X = x].$$
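As a concrete aside (a sketch of ours, not from the text), this definition can be evaluated directly for any finite distribution; the function name `entropy` and the dictionary representation of $\Pr[X = x]$ are choices made here purely for illustration:

```python
import math

def entropy(dist):
    """Shannon entropy H(X) in bits, where `dist` maps each outcome x
    to Pr[X = x]. Zero-probability terms are skipped, following the
    usual convention that 0 * log 0 = 0."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# A fair coin carries exactly one bit of entropy; a biased coin, less.
print(entropy({"heads": 0.5, "tails": 0.5}))  # 1.0
print(entropy({"heads": 0.9, "tails": 0.1}))  # about 0.47
```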
The joint entropy $H(X, Y)$ of two random variables is defined as the entropy of the joint variable $Z = (X, Y)$, i.e.
$$H(X, Y) = -\sum_{x, y} \Pr[X = x, Y = y] \log_2 \Pr[X = x, Y = y].$$
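Since $H(X, Y)$ is nothing but the entropy of the joint variable $Z = (X, Y)$, the same sketch applies verbatim to a distribution over pairs (reusing the assumed `entropy` helper from above):

```python
def joint_entropy(joint):
    """H(X, Y) in bits, where `joint` maps pairs (x, y) to
    Pr[X = x, Y = y]. This is `entropy` applied to the joint law."""
    return entropy(joint)

# Two independent fair bits: H(X, Y) = H(X) + H(Y) = 2 bits.
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
print(joint_entropy(indep))  # 2.0
```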
Note that $X$ and $Y$ are independent if and only if $\Pr[X = x, Y = y] = \Pr[X = x] \times \Pr[Y = y]$ for any $x$ and $y$. We further define the conditional entropy $H(X \mid Y)$ as $H(X \mid Y) = H(X, Y) - H(Y)$, i.e.
$$H(X \mid Y) = -\sum_{x, y} \Pr[X = x, Y = y] \log_2 \Pr[X = x \mid Y = y].$$
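One can check numerically that the defining sum and the difference $H(X, Y) - H(Y)$ agree on a small joint distribution. The following sketch builds on the helpers above; `marginal_y` is our own name for the routine that sums the joint distribution over $x$:

```python
def marginal_y(joint):
    """Pr[Y = y], obtained by summing the joint distribution over x."""
    out = {}
    for (x, y), p in joint.items():
        out[y] = out.get(y, 0.0) + p
    return out

def conditional_entropy(joint):
    """H(X | Y) from the defining sum, using
    Pr[X = x | Y = y] = Pr[X = x, Y = y] / Pr[Y = y]."""
    py = marginal_y(joint)
    return -sum(p * math.log2(p / py[y])
                for (x, y), p in joint.items() if p > 0)

# X is a fair bit and Y is a noisy copy of X (equal with probability 3/4).
noisy = {(0, 0): 0.375, (0, 1): 0.125, (1, 0): 0.125, (1, 1): 0.375}
print(conditional_entropy(noisy))                         # about 0.81
print(joint_entropy(noisy) - entropy(marginal_y(noisy)))  # same value
```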
Here are some basic facts on entropy.¹⁷
Theorem 1.1. For any distribution, we have
- $H(X, Y) \ge H(X)$, with equality if and only if $Y$ can be written as $f(X)$;
- $H(X, Y) \le H(X) + H(Y)$, with equality if and only if $X$ and $Y$ are independent;
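A quick numerical sanity check of both facts, building on the hypothetical helpers and distributions from the sketches above (`marginal_x` simply mirrors `marginal_y`):

```python
def marginal_x(joint):
    """Pr[X = x], obtained by summing the joint distribution over y."""
    out = {}
    for (x, y), p in joint.items():
        out[x] = out.get(x, 0.0) + p
    return out

hx, hy = entropy(marginal_x(noisy)), entropy(marginal_y(noisy))
hxy = joint_entropy(noisy)
assert hxy >= hx - 1e-12       # H(X, Y) >= H(X)
assert hxy <= hx + hy + 1e-12  # H(X, Y) <= H(X) + H(Y)

# The second bound is tight for the independent pair `indep` above:
print(joint_entropy(indep))                                    # 2.0
print(entropy(marginal_x(indep)) + entropy(marginal_y(indep)))  # 2.0
```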
¹⁷ For more information, see the textbook by Cover and Thomas (Ref. [52]).