Let $P(r,k)$ be the probability that a random $r$-bit integer that has passed the Miller-Rabin test with $k$ random bases turns out to be composite. Using the Prime Number Theorem we see that, for example, if we choose a random odd 1024-bit integer, we have a probability of about $1/355$ of choosing a prime. Suppose, only for the sake of this argument, that the probability that a composite number passes the test with one base is approximately $1/4$. Then it would be completely erroneous to assume that $P(1024,1) \approx 1/4$. Indeed, it is clear that, with these hypotheses, $P(1024,1) > 1/4$, for it would be much more likely to have the "bad luck" of choosing a composite that passes the test, with probability close to $1/4$, than to have the "good luck" of choosing a prime, with probability around $1/355$.
If we denote by $X$ the event that $n$ is composite, $\bar{X}$ the event that $n$ is prime, and $Y_k$ the event that Miller-Rabin with $k$ bases declares $n$ to be prime, then we have:
Proposition 6.1
$$P(r,k) \le 4^{-k}\,\frac{\Pr(X)}{\Pr(\bar{X})}.$$
Proof Using the inequality $\Pr(Y_k) \ge \Pr(Y_k|\bar{X})\Pr(\bar{X})$, Bayes' formula, and Theorem 6.9, which tells us that $\Pr(Y_k|X) \le 4^{-k}$, we have:
$$P(r,k) = \Pr(X|Y_k) = \frac{\Pr(Y_k|X)\Pr(X)}{\Pr(Y_k)} \le \frac{\Pr(Y_k|X)\Pr(X)}{\Pr(Y_k|\bar{X})\Pr(\bar{X})} = \frac{\Pr(Y_k|X)\Pr(X)}{\Pr(\bar{X})} \le 4^{-k}\,\frac{\Pr(X)}{\Pr(\bar{X})},$$
where the middle equality holds because $\Pr(Y_k|\bar{X}) = 1$: a prime always passes the test.
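To get a feel for this bound, it can be evaluated numerically under the PNT approximation used above; this sketch and the name `mr_error_bound` are ours:

```python
import math

def mr_error_bound(r: int, k: int) -> float:
    """Evaluate the bound of Proposition 6.1, 4^(-k) * Pr(X)/Pr(X-bar),
    using the PNT approximation Pr(X-bar) ~ 2/(r ln 2) for the
    probability that a random odd r-bit integer is prime."""
    p_prime = 2 / (r * math.log(2))               # Pr(X-bar)
    return 4.0 ** (-k) * (1 - p_prime) / p_prime  # 4^(-k) * Pr(X)/Pr(X-bar)

# For r = 1024 the factor Pr(X)/Pr(X-bar) is about 354, so the
# single-base bound is vacuous (greater than 1):
print(mr_error_bound(1024, 1))    # ≈ 88.5
print(mr_error_bound(1024, 30))   # ≈ 3.1e-16
```

Note that this is only the worst-case bound of the proposition; it says nothing for $k = 1$ at this size.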
Thus we see that the probability of error of the Miller-Rabin based RandomPrime algorithm might increase with the size $r$ of $n$, because the term $4^{-k}$ is multiplied by a factor equal to the inverse of the probability that a random odd integer of size $r$ is prime. Since, by the PNT, $\Pr(\bar{X}) \approx 2/(r \ln 2)$, this factor would be approximately equal to $(r \ln 2)/2$. If the error probability did indeed increase with the size of $n$, then it would also be necessary to increase the number of bases accordingly, and this would make RandomPrime less efficient (although still polynomial). However, this does not take into account the fact that, as we have seen in the experiments above, $P(r,k)$ is usually much smaller than $4^{-k}$ and, moreover, it decreases as $r$ increases.
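The experiments referred to above are not reproduced here, but the phenomenon is easy to observe in a small Monte Carlo sketch (our own illustration; the helper names `passes_one_base` and `is_prime` are hypothetical): among random odd 16-bit composites, the fraction that passes a single random base is far below the worst case $1/4$.

```python
import random

def passes_one_base(n: int) -> bool:
    """One round of Miller-Rabin on odd n > 3 with a single random base."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(random.randrange(2, n - 1), d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def is_prime(n: int) -> bool:
    """Trial division; fine for the small moduli sampled here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Estimate Pr(Y_1 | X) over random odd 16-bit composites.
random.seed(1)
passes = trials = 0
while trials < 20000:
    n = random.randrange(2**15, 2**16) | 1   # random odd 16-bit integer
    if is_prime(n):
        continue                             # condition on n being composite
    trials += 1
    passes += passes_one_base(n)
print(passes / trials)   # far below 0.25 in practice
```

This is of course only a small-parameter experiment; the cited results of Burthe make the observation precise for cryptographic sizes.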
In fact, Burthe proved in [44] that $P(r,k) < 4^{-k}$ and that, for large integers, the value of $P(r,k)$, with $k$ fixed, also decreases as $r$ gets larger, so that $P(r,k)$ is actually much smaller than this bound. For example, it is also shown in [44] that $P(500,1) < 4^{-28}$,
so that a random 500-bit integer that has passed Miller-Rabin with just one base has a very high probability of being prime. This probability might not be considered large enough, but from [44] it also follows that $P(500,16) < 10^{-50}$, which can be considered negligible in the practical sense.⁴
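The RandomPrime algorithm discussed here can be sketched in Python as follows (a sketch under our own naming: `random_prime` and `probably_prime` are hypothetical names, and the default $k = 30$ is an arbitrary illustrative choice):

```python
import random

def probably_prime(n: int, k: int) -> bool:
    """Miller-Rabin with k uniformly random bases."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(r: int, k: int = 30) -> int:
    """Sample random odd r-bit integers until one passes Miller-Rabin
    with k bases; by the PNT this takes about (r ln 2)/2 attempts on
    average for odd candidates."""
    while True:
        # Force the top bit (so n really has r bits) and the low bit (odd).
        n = random.getrandbits(r) | (1 << (r - 1)) | 1
        if probably_prime(n, k):
            return n

p = random_prime(512)
print(p.bit_length())   # 512
```

In practice almost all composite candidates are rejected after the very first base, so the expected cost is dominated by the $O(r)$ rejected candidates rather than by the $k$ bases spent on the final prime.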
From a more theoretical point of view, we note that, if we take, for example, $k = r = O(\mathrm{len}(n))$, then the algorithm runs in expected polynomial time $O(r^5)$ (since we expect to make $O(r)$ calls to Miller-Rabin, and for each of them we would perform $O(r)$ exponentiations, each of which requires time $O(r^3)$). Moreover, since
⁴ Regarding this probability we can mention the following quote from Borel cited in [118, p. 126]: "Un phénomène dont la probabilité est $10^{-50}$ ne se produira donc jamais, ou du moins ne sera jamais observé." ("A phenomenon whose probability is $10^{-50}$ will therefore never occur, or at least will never be observed.")