choose integers of the required size at random and submit them to a primality test until
a prime is found. The primality test used can be either probabilistic or deterministic
(with the former being the more efficient by far) but the prime-finding algorithm will
be probabilistic anyway because we have to start by choosing random integers. We
will see in Chap. 6 that this allows us to find primes in expected polynomial time,
the reason being essentially that primes are sufficiently dense among the integers so
that if p is the probability that a randomly chosen integer of a given size is prime,
then 1/p is a polynomial function of the size. Since the expected number of trials
to find a prime is equal to 1/p by Proposition 2.2 and each trial involves a
polynomial-time primality test, the algorithm does indeed run in expected polynomial
time.
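The procedure just described can be sketched in a few lines of Python. This is a minimal illustration, not the book's algorithm: it uses the Miller-Rabin probabilistic test, and the function names and the choice of 40 test rounds are illustrative assumptions.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test (illustrative sketch)."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # probably prime

def random_prime(bits):
    """Choose random odd integers of the given bit size until one passes
    the test; by the density of primes the expected number of trials is
    polynomial in the size (roughly bits * ln 2 / 2 for odd candidates)."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

p = random_prime(256)
```

Since each trial runs the polynomial-time test once, the loop as a whole runs in expected polynomial time, exactly as in the argument above.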
2.3.5 Final Remarks on Complexity
In this section we briefly discuss some aspects of complexity that, although not
required to read the rest of the book, are relevant for cryptography and are also of
considerable interest from the point of view of theoretical computer science. We
also include some bibliographic references for the reader who wants additional
information on the subject.
All the notions of complexity discussed so far are based on evaluating the worst
case of the computational problems involved. For example, the idea behind the
concept of NP-completeness is that an algorithm that can efficiently solve all
instances of the problem, including the most difficult ones, would lead to an
efficient algorithm to solve all the problems in NP. But there are NP-complete
problems that are easy to solve most of the time, and this is perfectly compatible
with the hypothesis that P ≠ NP, which only implies that there is no efficient
algorithm to solve these
problems in the worst case [6]. Public-key cryptography is based, as we shall see, on
the hypothesis that some computational problems are difficult, but problems that are
hard in the worst case but not on average are not good for this purpose and, in fact,
one would like to use problems which are hard for most instances. For example, the
security of RSA is based on the assumption that most integers that are the product of
two randomly chosen large primes are hard to factor. The plausibility of this assump-
tion makes the factorization problem adequate for cryptographic purposes although,
as we have already mentioned, it is not thought to be
NP-complete. In contrast with
this, even if we assume that P ≠ NP, we do not have an a priori guarantee that
an NP-complete problem will be hard on average, and hence such a problem may
not be suitable for cryptographic use. On the other hand, we will see that the hard
problems of cryptographic interest are usually random self-reducible (see Definition
7.6) in the sense that any instance can be reduced in polynomial time to a random
instance and hence, if the problem is hard in the worst case, then it is also hard on
average.
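The discrete logarithm problem is a standard example of this phenomenon, and the reduction can be sketched concretely. The following toy Python sketch is an illustration not taken from the text: an arbitrary instance y = g^x is blinded into a uniformly random instance by multiplying by g^r for random r, so an algorithm that solves random instances solves all of them. The group parameters (and the brute-force "oracle") are chosen purely for illustration.

```python
import random

# Toy group: 1019 is a safe prime, and G = 4 generates the subgroup of
# prime order Q = 509. Real use would require a much larger group.
P = 1019
G = 4
Q = 509

def solve_random_instance(h):
    """Stand-in oracle for *random* instances: brute-force discrete log."""
    for x in range(Q):
        if pow(G, x, P) == h:
            return x
    raise ValueError("element not in the subgroup")

def dlog_via_random_self_reduction(y):
    """Reduce an arbitrary instance y = G^x to a uniformly random one."""
    r = random.randrange(Q)              # random blinding exponent
    y_blinded = (y * pow(G, r, P)) % P   # G^(x+r): uniform in the subgroup
    x_blinded = solve_random_instance(y_blinded)
    return (x_blinded - r) % Q           # unblind: x = (x + r) - r mod Q

x = 123
y = pow(G, x, P)
assert dlog_via_random_self_reduction(y) == x
```

Because y_blinded is uniformly distributed regardless of the input y, an algorithm that succeeds on most random instances succeeds on every instance, which is precisely why worst-case and average-case hardness coincide for such problems.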
These considerations lead to the study of average-case complexity, and it is by no
means obvious how to precisely define what it means for a problem to be “intractable