The mutual information $I(X;Y)$ quantifies the amount of information shared between two variables $X$ and $Y$. It is defined as
$$I(X;Y) = H(X) - H(X \mid Y).$$
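For concreteness, the following short Python sketch (a reading aid, not part of the original text; numpy and the small example joint pmf are assumptions) computes $I(X;Y)$ directly from this definition:

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in bits) of a pmf; zero-probability terms are dropped.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint pmf of (X, Y): rows index x, columns index y.
p_xy = np.array([[0.25, 0.25],
                 [0.00, 0.50]])
p_x = p_xy.sum(axis=1)  # marginal pmf of X
p_y = p_xy.sum(axis=0)  # marginal pmf of Y

# H(X|Y) = sum_y f(y) * H(X | Y = y)
h_x_given_y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j])
                  for j in range(len(p_y)) if p_y[j] > 0)

print(entropy(p_x) - h_x_given_y)  # I(X;Y) = H(X) - H(X|Y), about 0.3113 bits
```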
Mutual information is in fact a special case of the Kullback-Leibler (KL) divergence [15]. This divergence measures the dissimilarity between two distributions.
Let $f$ and $g$ be two pdfs of a random variable $X$. The KL divergence, also called relative entropy, is then defined as
$$D_{\mathrm{KL}}(f \,\|\, g) = \sum_{x} f(x) \log \frac{f(x)}{g(x)}.$$
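As a small numerical illustration (again a sketch assuming numpy; the two distributions are made up), the divergence is nonnegative and vanishes exactly when the distributions coincide:

```python
import numpy as np

def kl_divergence(f, g):
    # D_KL(f || g) in bits; terms with f(x) = 0 contribute nothing,
    # and g must be positive wherever f is.
    mask = f > 0
    return np.sum(f[mask] * np.log2(f[mask] / g[mask]))

f = np.array([0.5, 0.25, 0.25])
g = np.array([1/3, 1/3, 1/3])
print(kl_divergence(f, g))  # about 0.085 bits
print(kl_divergence(f, f))  # 0.0, since the distributions coincide
```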
The mutual information can then be described as
$$I(X;Y) = D_{\mathrm{KL}}\bigl(f(x,y) \,\big\|\, f(x)f(y)\bigr).$$
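This identity can be checked numerically on the same toy joint pmf used above (a sketch assuming numpy): the KL divergence between the joint pmf and the product of its marginals reproduces the value obtained from $H(X) - H(X \mid Y)$.

```python
import numpy as np

p_xy = np.array([[0.25, 0.25],
                 [0.00, 0.50]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

prod = np.outer(p_x, p_y)  # product of marginals f(x) f(y)

mask = p_xy > 0
print(np.sum(p_xy[mask] * np.log2(p_xy[mask] / prod[mask])))
# about 0.3113 bits, matching H(X) - H(X|Y) computed earlier
```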
2.2 Generalized Mutual Information
Let $X$ be a discrete random variable as previously defined. The Rényi entropy [25] of order $\alpha$ is defined as
$$H_{\alpha}(X) =
\begin{cases}
\dfrac{1}{1-\alpha} \log \sum_{x} f(x)^{\alpha} & \text{for } \alpha \geq 0,\ \alpha \neq 1, \\[4pt]
-\sum_{x} f(x) \log f(x) & \text{for } \alpha = 1.
\end{cases}$$
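A direct transcription of this definition into Python (a sketch assuming numpy; the base-2 logarithm is an arbitrary choice) shows the two branches, with $\alpha = 1$ recovering the Shannon entropy:

```python
import numpy as np

def renyi_entropy(p, alpha):
    # Rényi entropy H_alpha (in bits) of a pmf p, for alpha >= 0.
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log2(p))   # alpha = 1: Shannon entropy
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

p = np.array([0.5, 0.25, 0.25])
print(renyi_entropy(p, 1.0))  # H_1 = 1.5 bits (Shannon)
print(renyi_entropy(p, 2.0))  # H_2 = -log2(sum f(x)^2), the collision entropy
```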
The Shannon entropy corresponds to $H_{1}(X)$. With the previous definition of the Rényi entropy, we can introduce the quantity
$$I_{\alpha}(X;Y) = H_{\alpha}(X) + H_{\alpha}(Y) - H_{\alpha}(X,Y).$$
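Reusing the renyi_entropy sketch from above (numpy assumed, toy joint pmf made up), $I_{\alpha}$ follows directly; $\alpha = 1$ gives the classical mutual information, and $\alpha = 2$ gives the quantity used for the GMI discussed below:

```python
import numpy as np

def renyi_entropy(p, alpha):
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def i_alpha(p_xy, alpha):
    # I_alpha(X;Y) = H_alpha(X) + H_alpha(Y) - H_alpha(X,Y)
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    return (renyi_entropy(p_x, alpha) + renyi_entropy(p_y, alpha)
            - renyi_entropy(p_xy.ravel(), alpha))

p_xy = np.array([[0.25, 0.25],
                 [0.00, 0.50]])
print(i_alpha(p_xy, 1.0))  # about 0.3113 bits: classical mutual information
print(i_alpha(p_xy, 2.0))  # I_2, the quantity behind the GMI
```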
The quantity $I_{\alpha}$ has the following property: $I_{\alpha} \geq 0$ if and only if $\alpha = 0$ or $\alpha = 1$.
The value $I_{\alpha}$ only corresponds to the classical definition of mutual information in these two cases. However, in [23, Basic Theorem, Ch. 3], the authors consider the case $\alpha = 2$. Using the collision entropy $H_{2}$, they call the quantity $I_{2}(X;Y)$ the Generalized Mutual Information (GMI) when either the random variable $X$ or $Y$ is uniformly distributed. In this case, the GMI and the classical mutual information are both strictly positive, and both measure the independence of the two variables. The GMI is particularly interesting as there is a more efficient method of estimation based on kernel estimators (Sec. 4.3) [22].
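To give a flavour of why kernel estimators help here, the following sketch estimates the collision entropy $H_{2}$ of a continuous leakage sample with a Gaussian kernel density estimate. It is only an illustration of the idea, not the estimator of [22]; numpy, the bandwidth $h$, and the simulated data are all assumptions. For a Gaussian kernel, the collision integral $\int \hat{f}(x)^2\,dx$ has a closed form as a double sum of Gaussians, which is what makes the $\alpha = 2$ case comparatively cheap to estimate.

```python
import numpy as np

def collision_entropy_kde(samples, h):
    # Estimate H_2 (in bits) of a 1-D sample via a Gaussian KDE of bandwidth h.
    # For a Gaussian kernel: int fhat(x)^2 dx = (1/n^2) sum_{i,j} N(x_i - x_j; 0, 2h^2).
    x = np.asarray(samples, dtype=float)
    n = len(x)
    d = x[:, None] - x[None, :]                  # pairwise differences
    var = 2.0 * h * h
    gauss = np.exp(-d ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    collision = gauss.sum() / n ** 2             # estimate of int f^2
    return -np.log2(collision)

rng = np.random.default_rng(0)
leakage = rng.normal(size=1000)                  # stand-in for measured traces
print(collision_entropy_kde(leakage, h=0.3))
# roughly 1.8 bits; the exact H_2 of a standard normal is 0.5 * log2(4 * pi)
```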
3 Classical Side-Channel Distinguishers
3.1 Differential Side-Channel Model
Let $K$ be a random variable representing a part of the secret. Let $X$ be a random variable representing a part of the input, or output, of the cryptographic