3.2 Algorithm Theory
Just as this is not a book on mathematics, it is also not a book on algorithm theory. Hence, we will not require
the reader to have a great deal of understanding of the methods used to determine the exact running time and
storage requirements of the algorithms we will be discussing. However, it is important to have a decent
understanding of how the various algorithms compare to each other.
Algorithmic running time and storage space are often written in terms of order, also known as "Big-Oh"
notation, like O(1) and O(n²), where n is usually the input size. For an algorithm to have a worst-case running
time of, say, O(n) means that, roughly speaking, the algorithm will terminate within some fixed multiple of n
steps (i.e., in time proportional to the size of the input). Likewise, if an algorithm has, say, O(n²) storage
requirements in the worst case, it will require at most some fixed multiple of n² storage (i.e., proportional to
the square of the size of the input).
For example, suppose we devised a method for testing whether a number is even by dividing it by 2, using
the long-division method everyone learned in elementary school. For the sake of this, and other algorithms in
this chapter, we let n be the size of the number in terms of digits. For example, 12345 has length 5 in decimal, or
length 14 in binary; the two lengths always differ by a factor of about 1/log₁₀ 2 = log₂ 10 ≈ 3.322, by the
change-of-base rule for logarithms. Thus, it doesn't affect our running-time analysis to use one base or the other,
since an order expressed in one will always be a constant multiple of the other, and hence have the same order
in Big-Oh notation. In this book, we typically measure key size in bits.
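As a quick numerical check of this factor (a sketch of our own, not code from the text), we can compare
decimal and binary lengths in Python and watch their ratio approach log₂ 10 ≈ 3.322 as the numbers grow:

    import math

    # The change-of-base factor relating binary and decimal lengths.
    print(math.log2(10))  # ~3.3219

    for n in (12345, 10**10, 10**50):
        dec_len = len(str(n))      # number of digits in base 10
        bin_len = n.bit_length()   # number of digits (bits) in base 2
        print(dec_len, bin_len, bin_len / dec_len)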
The previous simple division algorithm has a running time of O(n) and storage requirements of O(1). Why?
Because the standard division algorithm we learn early on is to take each digit, left to right, and divide it by 2;
we then take the remainder, multiply it by 10, add it to the next digit, divide again, and so on. Each step
takes about three operations, and there will be about as many steps as there are digits, giving us roughly
3n operations; this is a fixed multiple of n, so our running time is O(n). Since we never had to keep track of
more than one or two variables besides the input itself, we use a constant amount of storage.
Algorithms with O(n) running time are typically called linear algorithms.
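Here is the schoolbook procedure sketched in Python (our own illustration; the function name
is_even_by_long_division is made up for this example). It walks the decimal digits left to right, carrying the
remainder, and declares the number even exactly when the final remainder is zero:

    def is_even_by_long_division(number_str):
        """O(n) evenness test: long division of an n-digit decimal string by 2."""
        remainder = 0
        for digit in number_str:
            # Carry the previous remainder into this digit, then divide by 2.
            value = remainder * 10 + int(digit)
            remainder = value % 2
        # Storage is O(1): only `remainder` and `value` are tracked,
        # no matter how many digits the input has.
        return remainder == 0

    print(is_even_by_long_division("12345"))  # False
    print(is_even_by_long_division("12346"))  # True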
This is a suboptimal algorithm, though. We might also recall from elementary school that the last digit alone
can tell us if a number is even or odd: if the last digit is even, the entire number is even; otherwise, the number
is odd. This means we can devise an O(1) evenness-checking algorithm that just checks the last digit. It always
takes exactly two operations: one to divide the last digit by 2, and one to check whether the remainder is zero
(the number is even) or one (the number is odd). Algorithms that run in less than O(n) time are called sublinear.
Who needs all that input anyway, right? It's just taking up space.
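A sketch of the constant-time version (again our own illustration, with a made-up function name): inspect only
the final digit and ignore the rest of the input entirely.

    def is_even_last_digit(number_str):
        """O(1) evenness test: one division and one comparison on the last digit."""
        return int(number_str[-1]) % 2 == 0

    print(is_even_last_digit("12345"))  # False
    print(is_even_last_digit("12346"))  # True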
If an algorithm runs in O(nᵖ) time, where p is some fixed number, then it is said to be a polynomial-time
algorithm. This is an important class, often denoted simply as P. The class is important because polynomial-time
algorithms are, comparatively, the well-behaved ones: other, worse-performing classes of algorithms exist, known
as superpolynomial-time algorithms. For example, an algorithm whose running time is bounded by O(aⁿ), for
some fixed number a > 1, is called an exponential-time algorithm. In general, superpolynomial- and
exponential-time algorithms take significantly longer to run than polynomial-time algorithms. Algorithms that
take less than exponential time in n to finish are called subexponential. Figure 3-1 shows the running times of
the various classes of algorithms.
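To get a feel for the gap between these classes, here is a rough numerical sketch of our own (standing in for
the kind of comparison Figure 3-1 makes), tabulating step counts for a few input sizes:

    # Step counts for linear, polynomial, and exponential growth.
    for n in (10, 20, 40, 80):
        print(f"n={n:3d}  n^2={n**2:6d}  n^3={n**3:8d}  2^n={2**n}")

Even at n = 80, the polynomial columns stay in the hundreds of thousands, while 2ⁿ already exceeds 10²⁴.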