if Pr[X = x] = 0 for all but at most n values of x, then H(X) ≤ log2 n, with equality if and only if all the nonzero Pr[X = x] are equal to 1/n.
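This bound is easy to check numerically. The sketch below (a minimal illustration, assuming a hypothetical distribution supported on n = 4 values) computes H(X) and verifies that it is below log2 n, with equality for the uniform distribution:

```python
from math import log2

def entropy(dist):
    """Shannon entropy H(X) in bits of a distribution given as a
    dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Two hypothetical distributions supported on n = 4 values.
biased = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
uniform = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}

n = 4
assert entropy(biased) < log2(n)                 # strict for a non-uniform distribution
assert abs(entropy(uniform) - log2(n)) < 1e-12   # equality for the uniform one
```

Here entropy(biased) is 1.75 bits, strictly below log2 4 = 2 bits, which is reached only by the uniform distribution.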
1.3.3 Perfect Secrecy
Perfect secrecy means that the a posteriori distribution of the plaintext X after we know the ciphertext Y is equal to the a priori distribution of the plaintext: the conditional distribution of X given Y is equal to the original distribution. Formally, for all x and y such that Pr[Y = y] ≠ 0, we have Pr[X = x | Y = y] = Pr[X = x].
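As a sanity check of this definition, the following sketch verifies it exactly with rational arithmetic, assuming a hypothetical toy cipher: a one-time pad over Z4, Y = (X + K) mod 4, with K uniform and independent of X, and an arbitrary plaintext distribution:

```python
from fractions import Fraction
from itertools import product

# Hypothetical toy cipher: Y = (X + K) mod 4 with a uniform key K
# independent of X.  The plaintext distribution is arbitrary.
px = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}
pk = {k: Fraction(1, 4) for k in range(4)}

# Joint distribution of (X, Y).
joint = {}
for (x, p), (k, q) in product(px.items(), pk.items()):
    y = (x + k) % 4
    joint[(x, y)] = joint.get((x, y), 0) + p * q

# Marginal distribution of Y.
py = {y: sum(joint.get((x, y), 0) for x in px) for y in range(4)}

# Perfect secrecy: Pr[X = x | Y = y] = Pr[X = x] whenever Pr[Y = y] != 0.
for x, y in product(px, py):
    assert joint[(x, y)] / py[y] == px[x]
```

Using Fraction avoids floating-point rounding, so the equality Pr[X = x | Y = y] = Pr[X = x] is checked exactly for every pair (x, y).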
Theorem 1.2. Perfect secrecy is equivalent to H(X|Y) = H(X) and to the statistical independence between X and Y.
Proof. We have H(X|Y) = H(X, Y) − H(Y) ≤ H(X), with equality if and only if X and Y are independent. Thus H(X|Y) = H(X) is equivalent to the independence of X and Y.
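The identity H(X|Y) = H(X, Y) − H(Y) and the inequality H(X|Y) ≤ H(X) can be checked numerically. A sketch with two hypothetical joint distributions, one fully dependent (Y = X) and one independent:

```python
from math import log2
from itertools import product

def H(dist):
    """Shannon entropy in bits of a dict outcome -> probability."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginals(joint):
    """Marginal distributions of X and Y from a joint dict (x, y) -> p."""
    pX, pY = {}, {}
    for (x, y), p in joint.items():
        pX[x] = pX.get(x, 0) + p
        pY[y] = pY.get(y, 0) + p
    return pX, pY

# Fully dependent pair: Y = X, a "cipher" leaking everything.
joint_dep = {(0, 0): 0.5, (1, 1): 0.5}
# Independent pair: X and Y uniform on {0, 1}, chosen independently.
joint_ind = {(x, y): 0.25 for x, y in product((0, 1), repeat=2)}

for joint in (joint_dep, joint_ind):
    pX, pY = marginals(joint)
    H_X_given_Y = H(joint) - H(pY)       # H(X|Y) = H(X, Y) - H(Y)
    assert H_X_given_Y <= H(pX) + 1e-12  # always at most H(X)

# Equality H(X|Y) = H(X) holds for the independent pair only.
pX, pY = marginals(joint_ind)
assert abs((H(joint_ind) - H(pY)) - H(pX)) < 1e-12
```

For the dependent pair H(X|Y) is 0 while H(X) is 1 bit; for the independent pair both equal 1 bit, matching the theorem.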
If we have perfect secrecy, then

Pr[X = x, Y = y] = Pr[X = x | Y = y] Pr[Y = y] = Pr[X = x] Pr[Y = y]

for any x and y; thus X and Y are independent, thus H(X|Y) = H(X).
If now H(X|Y) = H(X), then X and Y are independent, and we have

Pr[X = x | Y = y] = Pr[X = x, Y = y] / Pr[Y = y] = Pr[X = x]

for any x and y such that Pr[Y = y] ≠ 0; thus we have perfect secrecy.
Theorem 1.3 (Shannon 1949). Perfect secrecy implies H(K) ≥ H(X).
Proof. We first prove the intermediate property which holds in all cases: H(Y) ≥ H(X). First, we have H(Y) ≥ H(Y|K). We notice that the knowledge of K gives the same distribution for X and Y, thus H(Y|K) = H(X|K). But since X and K are independent, we obtain H(Y|K) = H(X). We thus have H(Y) ≥ H(X).
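The intermediate property H(Y) ≥ H(X) can be illustrated numerically, assuming a hypothetical toy cipher: a one-time pad over Z4, Y = (X + K) mod 4, with K uniform and independent of a non-uniform plaintext X:

```python
from math import log2
from itertools import product

def H(dist):
    """Shannon entropy in bits of a dict outcome -> probability."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Hypothetical toy cipher: Y = (X + K) mod 4 with a uniform key.
px = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}

# Marginal distribution of Y = (X + K) mod 4, K uniform on Z4.
py = {}
for (x, p), k in product(px.items(), range(4)):
    y = (x + k) % 4
    py[y] = py.get(y, 0) + p * 0.25

assert H(py) >= H(px)   # the intermediate property H(Y) >= H(X)
```

Here the uniform key flattens Y to the uniform distribution, so H(Y) is 2 bits while H(X) is only 1.75 bits.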
We now notice that when X is fixed, the knowledge of K determines Y. Furthermore, K and X are independent. Thus we have H(Y, K | X) = H(K | X) + H(Y | K, X) = H(K). Then we have H(X, Y, K) ≥ H(X, Y), i.e. H(X) + H(Y, K | X) ≥ H(X) + H(Y | X). Thus we have H(K) ≥ H(Y|X). If we have perfect secrecy, we have H(Y|X) = H(X|Y) + H(Y) − H(X) = H(Y). Thus we have H(K) ≥ H(Y).
Hence we obtain H(K) ≥ H(X).
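Shannon's bound can be illustrated on a hypothetical toy one-time pad over Z4 (Y = (X + K) mod 4 with K independent of X): a uniform key gives perfect secrecy and satisfies H(K) ≥ H(X), while a biased key with H(K) < H(X) cannot be perfectly secret:

```python
from math import log2
from itertools import product

def H(dist):
    """Shannon entropy in bits of a dict outcome -> probability."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def conditional_is_marginal(px, pk):
    """Check Pr[X = x | Y = y] == Pr[X = x] for the toy cipher
    Y = (X + K) mod 4 with key distribution pk."""
    joint, py = {}, {}
    for (x, p), (k, q) in product(px.items(), pk.items()):
        y = (x + k) % 4
        joint[(x, y)] = joint.get((x, y), 0) + p * q
        py[y] = py.get(y, 0) + p * q
    return all(abs(joint.get((x, y), 0) / py[y] - px[x]) < 1e-12
               for x, y in product(px, py) if py[y] > 0)

px = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}       # H(X) = 1.75 bits
uniform_key = {k: 0.25 for k in range(4)}        # H(K) = 2 bits
biased_key = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}    # H(K) < 1.75 bits

assert conditional_is_marginal(px, uniform_key)  # perfect secrecy
assert H(uniform_key) >= H(px)                   # Shannon's bound holds
assert not conditional_is_marginal(px, biased_key)  # secrecy lost
```

This does not prove the theorem, of course, but it shows the two sides of it on one example: the perfectly secret instance obeys H(K) ≥ H(X), and the instance violating the key-entropy bound fails the perfect secrecy test.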