with care (e.g., [12]). For example, it is not possible to make meaningful statements about the computational complexity of a specific function (or of an algorithm that implements it); instead, one always has to consider an infinite class of functions (or algorithms). This is unfortunate and sometimes inappropriate, because many concrete cryptographic systems employ functions that are fixed and for which an asymptotic extension is not at all obvious. Furthermore, as mentioned earlier, the distinction between efficient (i.e., polynomial-time) and inefficient (i.e., super-polynomial-time) algorithms is vague. It sometimes leads to a situation in which a theoretically efficient algorithm is practically so inefficient that it is infeasible to execute it on any reasonably sized input.
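To see how misleading the asymptotic notion of efficiency can be in practice, consider the following back-of-the-envelope comparison in Python; the running-time functions n^20 and 1.01^n and the rate of 10^9 operations per second are arbitrary values chosen only for illustration and do not appear in the text above.

OPS_PER_SECOND = 10**9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years(operations: float) -> float:
    """Approximate running time in years at OPS_PER_SECOND basic operations per second."""
    return operations / OPS_PER_SECOND / SECONDS_PER_YEAR

for n in (100, 1_000, 10_000):
    poly = float(n) ** 20       # polynomial, i.e., "efficient" asymptotically
    superpoly = 1.01 ** n       # super-polynomial, i.e., "inefficient" asymptotically
    print(f"n = {n:>6}:  n^20 needs ~{years(poly):.2e} years,  "
          f"1.01^n needs ~{years(superpoly):.2e} years")

At these input sizes the asymptotically efficient algorithm is hopelessly slow, whereas the asymptotically inefficient one finishes instantly until n grows very large; the asymptotic labels alone say little about practical feasibility.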
To make things worse, complexity theory deals with worst-case complexity. This concept is questionable in cryptography: worst-case hardness only guarantees that some problem instances are hard to solve, whereas breaking a system must be hard for almost all problem instances. There are some results addressing the average-case complexity of problems (e.g., [13]). Note, however, that in cryptography even average-case complexity results are not good enough, because problems must be hard for almost all instances.
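The gap between worst-case hardness and hardness for almost all instances can be made concrete with a deliberately contrived sketch, used here only for illustration: a decision procedure whose worst-case running time is exponential but that answers almost every input immediately.

import secrets

def decide(x: bytes) -> bool:
    """Contrived decision procedure: constant time on almost all inputs,
    exponential time on the single all-zero input."""
    if any(x):                      # fast path, taken by almost every input
        return False
    n = 8 * len(x)                  # worst case: brute-force style loop that is
    steps = 2 ** min(n, 24)         # exponential in n (capped so the demo terminates)
    counter = 0
    for _ in range(steps):
        counter += 1
    return True

print(decide(secrets.token_bytes(16)))  # random input: almost surely the fast path
print(decide(bytes(16)))                # the rare worst-case input: noticeably slower

The worst-case complexity of decide is exponential in the input length, yet a randomly chosen input is answered immediately with overwhelming probability. A cryptographic hardness claim needs the opposite guarantee: hardness for almost every instance an adversary may face.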
Furthermore, instead of proving the hardness of finding an exact solution for a computational problem, one may want to argue that even approximating the exact solution is intractable (again, complexity theory is inappropriate for this kind of reasoning). As already mentioned at the beginning of Section 6.2, an inherent difficulty of complexity theory is that the state of the art in proving lower bounds for the computational complexity of a problem is poor. From a cryptographic viewpoint, it would be nice to have (and prove) some nontrivial lower bounds (be they polynomial or super-polynomial) for the complexity of breaking a concrete cryptographic system. Unfortunately, we are far away from that.

Finally, we noted that all computational models in use today are equivalent from a complexity-theoretic viewpoint. The discussion about the right model of computation, however, was reopened when Shor showed that certain problems can be solved in polynomial time on a quantum computer and Adleman showed that the same may be true for a DNA computer (see Section 6.5). A lot of research is currently being done (and large amounts of money are being spent) in quantum and DNA computing; hence, it will be interesting to see how these topics advance in the future.
References
[1] Garey, M.R., and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, 1979.
[2] Papadimitriou, C.H., Computational Complexity. Addison-Wesley, Reading, MA, 1993.
[3] Hopcroft, J.E., R. Motwani, and J.D. Ullman, Introduction to Automata Theory, Languages, and Computation, 2nd edition. Addison-Wesley, Reading, MA, 2001.
[4] Menezes, A., P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography. CRC Press, Boca Raton, FL, 1996.