2.1.1 Randomised algorithms
All our algorithms may be randomised, in the sense that they have access to a random
number generator. A deterministic algorithm should terminate after a finite number of
steps, but a randomised algorithm can run forever if an infinite sequence of "unlucky"
random choices is made.[1] Also, a randomised algorithm may output an incorrect answer
for some choices of randomness. A Las Vegas algorithm is a randomised algorithm which,
if it terminates,[2] outputs a correct solution to the problem. A randomised algorithm for a
decision problem is a Monte Carlo algorithm if it always terminates, and if the output
is "yes" then it is correct, and if the output is "no" then it is correct with "noticeable"
probability (see the next section for a formal definition of noticeable success probability).
An example of a Las Vegas algorithm is choosing a random quadratic non-residue
modulo p by choosing random integers modulo p and computing the Legendre symbol
(see Exercise 2.4.6 in Section 2.4); the algorithm could be extremely unlucky forever. Of
course, there is a deterministic algorithm for this problem, but its complexity is worse than
that of the randomised algorithm. An example of a Monte Carlo algorithm is testing primality of
an integer N using the Miller-Rabin test (see Section 12.1.2). Many of the algorithms in
this book are randomised Monte Carlo or Las Vegas algorithms. We will often omit the
words "Las Vegas" or "Monte Carlo".
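Both examples above can be sketched in Python. This is an illustrative sketch only: the function names, the use of Euler's criterion for the Legendre symbol, and the choice of 20 Miller-Rabin rounds are assumptions of this sketch, not details from the text.

```python
import random

def legendre(a, p):
    # Legendre symbol via Euler's criterion: a^((p-1)/2) mod p,
    # which is 1 for residues and p-1 (i.e. -1) for non-residues.
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def random_non_residue(p):
    # Las Vegas: sample until a quadratic non-residue is found.
    # Each trial succeeds with probability (p-1)/(2p), roughly 1/2,
    # so the expected number of iterations is about 2, but an
    # unlucky run could in principle loop forever. When it does
    # terminate, the answer is always correct.
    while True:
        a = random.randrange(1, p)
        if legendre(a, p) == -1:
            return a

def miller_rabin(n, trials=20):
    # Monte Carlo: always terminates. The answer "composite" (False)
    # is always correct; "probably prime" (True) can be wrong, with
    # probability at most 4^(-trials) for an odd composite n.
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True  # probably prime
```

Note the asymmetry between the two: random_non_residue trades bounded running time for a guaranteed-correct output, while miller_rabin trades certainty of correctness for a guaranteed bound on the number of rounds.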
Deterministic algorithms have a well-defined running time on any given problem
instance. For a randomised algorithm the running time for a fixed instance of the problem
is not necessarily well-defined. Instead, one considers the expected value of the running
time over all choices of the randomness. We usually consider worst-case complexity. The
worst-case complexity for input size n is the maximum, over all problem instances of size
n, of the expected running time of the algorithm. As always, when considering asymptotic
complexity it is necessary that the computational problem has a countably infinite number
of problem instances.
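The expected running time over the random choices can be made concrete with a small simulation. The loop below is illustrative only: the success probability 1/2 matches the quadratic non-residue search described earlier (roughly half the nonzero residues modulo p are non-residues), and the sample size is an arbitrary choice of this sketch. The number of trials until the first success is a geometric random variable with expectation 1/p_success.

```python
import random

def trials_until_success(p_success):
    # Count Bernoulli trials until the first success; this is a
    # geometric random variable with expected value 1/p_success.
    n = 0
    while True:
        n += 1
        if random.random() < p_success:
            return n

random.seed(0)  # fixed seed so the simulation is reproducible
samples = 100_000
avg = sum(trials_until_success(0.5) for _ in range(samples)) / samples
# avg is close to the expected value 1/0.5 = 2, even though any
# single run of the loop has no a priori bound on its length.
```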
A randomised algorithm is expected polynomial-time if the worst case, over all problem
instances of size n bits, of the expected value of the running time is O(n^c) for some
real constant c > 0. (An expected polynomial-time algorithm can run longer than polynomial-time if
it makes many "unlucky" choices.) A randomised algorithm is expected exponential-time
(respectively, expected subexponential-time) if there exists a real c > 1 (respectively, for all
real c > 1) such that the expected value of the running time on problem instances of size n
bits is O(c^n).
One can also consider average-case complexity, which is the average, over all problem
instances of size n, of the expected running time of the algorithm. Equivalently, the average-
case complexity is the expected number of bit operations of the algorithm, where the
expectation is taken over all problem instances of size n as well as over the random choices
made by the algorithm.
[1] In algorithmic number theory it is traditional to allow algorithms that do not necessarily terminate, whereas in cryptography
it is traditional to consider algorithms whose running time is bounded (typically by a polynomial in the input size). Indeed, in
security reductions it is crucial that an adversary (i.e., a randomised algorithm) always terminates. Hence, some of the definitions
in this section (e.g., Las Vegas algorithms) mainly arise in the algorithmic number theory literature.
[2] An alternative definition is that a Las Vegas algorithm has finite expected running time, and outputs either a correct result or
a failure symbol.