Digital Signal Processing Reference
A special case of entropy called min-entropy describes the worst-case uncertainty, i.e. it evaluates the best chance an adversary has of guessing a secret value chosen from a known distribution. The min-entropy of a random variable X is denoted H∞(X) and defined as H∞(X) = −log₂ max_x Pr[X = x]. A uniformly distributed n-bit variable has maximal min-entropy equal to n, while a deterministic value has min-entropy zero. To describe the noise, it is assumed that the measured fuzzy secret X is a bit vector of length n. Two distinct observations of the fuzzy secret possibly differ in a number of bit locations due to noise. The amplitude of the noise affecting X can be quantified by the maximum number δ(X) of differing bits between two measurements.
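As a small illustration (not part of the text), the min-entropy of a discrete distribution can be computed directly from its definition; the function name and the example distributions below are chosen for illustration only.

```python
import math

def min_entropy(probabilities):
    """Min-entropy H_inf(X) = -log2(max_x Pr[X = x]) of a discrete distribution."""
    return -math.log2(max(probabilities))

# A uniformly distributed 4-bit variable (16 equiprobable outcomes)
# has maximal min-entropy equal to 4.
uniform = [1 / 16] * 16
assert min_entropy(uniform) == 4.0

# A biased variable whose most likely outcome has probability 1/2
# offers an adversary much better guessing odds: only 1 bit of min-entropy.
biased = [0.5] + [0.5 / 15] * 15
assert min_entropy(biased) == 1.0
```

Note that min-entropy depends only on the single most likely outcome, unlike Shannon entropy, which averages over the whole distribution.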
4.2 Generating Cryptographic Keys from Fuzzy Secrets by Means of Error Correction
In most cases, a physical variable cannot be used directly. It should first be measured as accurately as possible and afterwards quantized into a discrete value which can be used by a digital algorithm. The quantization process translates physical noise into bit errors in a bit vector. Also, due to rounding, part of the information in the measured value is already lost. The quantization method should therefore be chosen carefully to balance these two effects, retained entropy versus bit-error rate, given the implementation constraints.
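The text does not fix a particular quantization scheme; the sketch below assumes a simple uniform step quantizer (the step size and bit width are illustrative) and shows how analog noise between two measurements of the same variable turns into bit errors.

```python
def quantize(value, step=0.1, bits=8):
    """Round an analog value to the nearest multiple of `step` and encode
    the level index as a fixed-width bit vector (MSB first).
    Rounding discards part of the information in the measurement."""
    level = round(value / step) % (1 << bits)
    return [(level >> i) & 1 for i in reversed(range(bits))]

# Two noisy measurements of the same physical variable can land in
# different quantization bins, which shows up as bit errors.
a = quantize(1.234)
b = quantize(1.287)  # slightly noisy re-measurement of the same quantity
errors = sum(x != y for x, y in zip(a, b))
```

Here the two measurements fall into adjacent bins (levels 12 and 13), so the resulting bit vectors differ in a single position.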
After measurement and quantization, one is left with a possibly non-uniform and unreliable bit string X ∈ {0, 1}ⁿ. By performing multiple measurements, the aforementioned parameters H∞(X) and δ(X) can be estimated. The next step in the transformation is dealing with the noise, which can introduce up to δ(X) bit errors. Error correction techniques are frequently used in channel coding theory. A basic construction applies error-correcting block codes in order to get rid of bit errors in a measurement of X. The technique works in two phases:
1. In the generation phase, X is measured and a codeword C is randomly selected from an [n, k, t]-block code, with n the code length, k the code dimension and t the error-correcting capability. The binary offset between X and C is calculated as W = X ⊕ C, and W is made publicly available, e.g. it is published in an online database.
2. In the reproduction phase, X′ is measured, which can differ from X in up to δ(X) bit positions. Using the publicly available W, one can calculate C′ = X′ ⊕ W, which differs from C in at most δ(X) bits. If the used code is chosen appropriately such that t ≥ δ(X), the decoding algorithm will succeed in computing C = Decode(C′). In that case the same X as in the generation phase can be reconstructed as X = C ⊕ W.
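The two phases above (the code-offset technique) can be sketched in a few lines, with a simple repetition code standing in for the generic [n, k, t] block code; the code choice, parameters and the simulated single-bit error are illustrative only.

```python
import secrets

def rep_encode(message, r):
    """Repeat each message bit r times; corrects up to (r-1)//2 errors per bit."""
    return [b for b in message for _ in range(r)]

def rep_decode(code, r):
    """Majority-vote each block of r bits back to one message bit."""
    return [int(sum(code[i * r:(i + 1) * r]) > r // 2)
            for i in range(len(code) // r)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

r, k = 5, 4  # code length n = k*r = 20; corrects t = 2 errors per block
X = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # fuzzy secret

# Generation phase: select a random codeword C, publish the offset W = X xor C.
C = rep_encode([secrets.randbits(1) for _ in range(k)], r)
W = xor(X, C)  # helper data; may be stored publicly

# Reproduction phase: a noisy re-measurement X' differs from X in <= t positions.
X_prime = X[:]
X_prime[3] ^= 1                 # simulate one bit error from noise
C_prime = xor(X_prime, W)       # differs from C in the same positions
C_rec = rep_encode(rep_decode(C_prime, r), r)  # decode back to C
X_rec = xor(C_rec, W)           # X = C xor W
assert X_rec == X
```

The key point is that W leaks the offset, not X itself, and that as long as the number of differing bits stays within the code's correcting capability t, the exact original X is recovered.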