on average”. The idea is that to capture the notion of “being efficient on typical
instances”, one should allow an algorithm to take a large amount of time on inputs that
occur with low probability, in such a way that if we have inputs that require increas-
ingly large running time, these inputs should occur with proportionally decreasing
probability. This is the idea underlying the definition of average-case complexity
given by Levin [129] in the 1980s, which was later modified by Impagliazzo [104].
We refer to these papers and also to [9] for the details.
We have mentioned the fact that computational efficiency can be defined inde-
pendently of the computational model used. This is true for classical models such
as Turing machines or Boolean circuits but does not apply to quantum computers.
These are computational devices based on the principles of quantum mechanics and
there is a celebrated result due to Peter Shor [177] that shows that quantum computers
can factor integers (and also solve other related problems) in polynomial time. This
poses a lethal threat to RSA and, in fact, to most public-key cryptosystems currently
in use. But it is not clear whether full-sized quantum computers—which, so far, exist
only as experimental devices with very little capacity—will ever be built and, on
the other hand, there are also public-key cryptosystems based on problems (mainly
lattice-theoretic ones) which have not been shown to be easy for quantum computers.
We say a few more words about this when discussing the current status of integer
factorization in Sect. 6.4.10.
2.4 The Euclidean Algorithm
Definition 2.18 Given a, b ∈ Z not both zero, the greatest common divisor of a and b, denoted gcd(a, b), is the largest integer that divides both a and b. We say that a and b are relatively prime if gcd(a, b) = 1.
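As a concrete illustration of Definition 2.18 (a sketch for small inputs only, not the method this section develops; the function name `gcd_by_definition` is ours), one can simply test every candidate divisor from the largest downward:

```python
def gcd_by_definition(a: int, b: int) -> int:
    """Largest integer dividing both a and b (not both zero), per Definition 2.18."""
    if a == 0 and b == 0:
        raise ValueError("gcd(0, 0) is undefined")
    a, b = abs(a), abs(b)
    # The gcd cannot exceed max(a, b); scan downward until a common divisor appears.
    for d in range(max(a, b), 0, -1):
        if a % d == 0 and b % d == 0:
            return d
```

For example, `gcd_by_definition(12, 18)` returns 6, and `gcd_by_definition(8, 15)` returns 1, so 8 and 15 are relatively prime. This takes up to max(a, b) trial divisions, which is already hopeless for the integers of several hundred digits mentioned below.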
The computation of the greatest common divisor is an interesting problem for
several reasons. In the first place, this computation is a basic component in many
number-theoretic algorithms, as we will see when implementing them. But it is also
interesting because it is not at all obvious how to compute, for example, the gcd of
two integers that have several hundred decimal digits each (as we will see later
on, integers of this size are commonly used in cryptography).
A straightforward procedure to obtain the gcd is to list all the positive divisors of a and of b. Both lists will be finite and the largest number on both lists will be gcd(a, b). But it is intuitively clear that this method will not work with large numbers and we could try a related but more sophisticated method instead, using the fundamental theorem of arithmetic (Theorem 2.2). From this theorem we can derive the following method to compute the gcd of two integers a and b which we can assume, without loss of generality, to be greater than 1. Suppose that $a = \prod_{i=1}^{k} p_i^{e_i}$ and $b = \prod_{i=1}^{k} p_i^{f_i}$, where the primes p_i are written in increasing order and we allow the exponents e_i and f_i to be zero. Then it is obvious that
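This factorization-based method can be sketched as follows (a toy illustration using naive trial-division factorization; the helper names are ours, not the book's): each prime is raised to the minimum of its exponents in the two factorizations.

```python
from collections import Counter

def factorize(n: int) -> Counter:
    """Trial-division factorization of n > 1, mapping each prime to its exponent."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors[n] += 1
    return factors

def gcd_by_factoring(a: int, b: int) -> int:
    """gcd via the fundamental theorem of arithmetic: product of p^min(e_i, f_i)."""
    fa, fb = factorize(a), factorize(b)
    g = 1
    for p, e in fa.items():
        g *= p ** min(e, fb[p])   # Counter returns exponent 0 if p does not divide b
    return g
```

For instance, 12 = 2^2 · 3 and 18 = 2 · 3^2, so `gcd_by_factoring(12, 18)` gives 2^1 · 3^1 = 6. The catch, of course, is that this requires factoring a and b first, which is believed to be hard for integers of cryptographic size; this is precisely why the Euclidean algorithm of this section is needed.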
 