should be independent of a particular computational model. Consequently, complexity theory does not have much in common with the benchmark testing commonly used in the trade press to compare the computational power of different computer systems, products, and models. Instead, complexity theory is used to determine the computational resources (e.g., time, space, and randomness) that are needed to compute a function or solve a problem. These computational resources, in turn, can be determined exactly or approximately by specifying lower and upper bounds.[1] Alternatively, one can consider the effects of limiting computational resources on the class of functions (problems) that can be computed (solved) in the first place.
In complexity theory, there are many functions and problems to look at. For example, for a positive integer n, one can look at the problem of deciding whether n is prime (or composite). This is a decision problem, and it is solved by providing a binary answer (i.e., yes or no).[2] An instance of this problem would be deciding whether n = 81 is prime (which is false, because 81 = 9 · 9).
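Such a decision procedure can be sketched in a few lines. The trial-division test below is only an illustration of what a decision problem looks like, not the polynomial-time algorithm mentioned in the footnotes:

```python
def is_prime(n: int) -> bool:
    """Decide the instance: is the positive integer n prime?"""
    if n < 2:
        return False
    d = 2
    # Trial division: a nontrivial divisor, if any, is found below sqrt(n).
    while d * d <= n:
        if n % d == 0:
            return False  # composite: d divides n
        d += 1
    return True

# The instance from the text: 81 = 9 * 9, so the answer is "no".
print(is_prime(81))  # False
```

The binary yes/no output is exactly what characterizes a decision problem, as opposed to, say, a search problem such as factoring, which must output an actual divisor.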
Consequently, a problem refers to a well-defined and compactly described, large class of instances characterized by some input and output parameters. Examples include deciding primality (as mentioned earlier), factoring integers, and deciding graph isomorphism. Against this background, it does not make much sense to define the computational difficulty or complexity of a single problem instance: there is always a trivial algorithm that solves the instance, namely the algorithm that simply outputs the correct solution. Consequently, the computational difficulty or complexity of a problem must always refer to a (potentially very) large class of instances. This fact is very important for understanding complexity theory and its results. Unfortunately, it also makes the results considerably more difficult to express and understand.
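The contrast between an instance and a problem can be made concrete with a hypothetical sketch (the function names are illustrative only). A "trivial algorithm" for the single instance n = 81 just outputs the precomputed answer in constant time; complexity only becomes meaningful for an algorithm that must handle every instance:

```python
def is_81_prime() -> bool:
    """Trivial algorithm for the single instance n = 81:
    the correct answer is simply hardwired."""
    return False  # 81 = 9 * 9 is composite

def is_prime(n: int) -> bool:
    """General algorithm for the whole class of instances;
    only here does asking about running time as a function
    of the input make sense."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
```

The first function runs in constant time but tells us nothing about the difficulty of primality testing; the second is what complexity theory actually analyzes.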
We said that results from complexity theory should be largely independent of a particular computational model (refer to Section 6.5 for an overview of the computational models in use today). Nevertheless, one must still have a model in mind when one works in theoretical computer science and complexity theory. As of this writing, the computational model of choice is the Turing machine.[3] Looking at Turing machines is sufficient, because there is a famous thesis in theoretical computer science, called Church's thesis,[4] that most results from complexity theory
1. Proving an upper bound is comparably simple: it suffices to give an algorithm together with an analysis of its computational complexity. Proving a lower bound is much more involved, because one must prove the nonexistence of any more efficient algorithm, known or unknown. Consequently, it does not come as a big surprise that no significant lower bound has been proven for the computational complexity of any practically relevant problem.
2. It was recently shown that there are deterministic polynomial-time algorithms to solve this problem (see Section 3.2.4.3). Consequently, this problem is known to be in P (the notion of the complexity class P is introduced later in this chapter).
3. Turing machines are named after Alan M. Turing, who lived from 1912 to 1954.
4. The thesis is named after Alonzo Church, who lived from 1903 to 1995.