with the order of magnitude complexity. The greatest merit of this method for
estimating execution time is that it is machine-independent. In other words, it
does not rely upon the specifics of a given computer, so the order of magnitude
complexity remains the same, irrespective of the computer being used. In the
analysis of the complexity of an algorithm, we need not know exactly how long
it takes (namely, the exact number of bit operations required to execute the
algorithm); rather, it suffices to compare it with other objects, and these
comparisons need not be immediate, but rather long term. In other words, what
Definition A.45 says is that if f is O(g), then eventually f(x) is bounded by
some constant multiple cg(x) of g(x). We do not know exactly what c happens
to be or just how big x must be before (A.14) occurs. However, for reasons
given above, it is enough to account for the efficiency of the given algorithm in
the worst-case scenario.
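For instance (the function f and the constant c = 4 below are chosen purely for illustration and do not appear in the text), the following sketch checks the bound of Definition A.45 numerically for f(x) = 3x^2 + 5x + 7 against g(x) = x^2:

# Numerical check of the Big O idea: f is O(g) when, beyond some point x0,
# f(x) <= c*g(x) for a fixed constant c.  The exact values of c and x0 do not
# matter for the order-of-magnitude statement.
def f(x):
    return 3 * x**2 + 5 * x + 7   # example function (hypothetical)

def g(x):
    return x**2                   # candidate bound g(x) = x^2

c = 4                             # one possible constant multiple
for x in range(1, 20):
    print(x, f(x), c * g(x), f(x) <= c * g(x))
# The comparison fails for small x but becomes (and stays) True once x is
# large enough; that "eventually" is all the definition requires.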
The amount of time taken by a computer to perform a task is (essentially)
proportional A.5 to the number of bit operations. In the simplest possible terms,
the constant of proportionality, which is the number of nanoseconds A.6 per
bit operation, depends upon the computer being used. This accounts for the
machine-independence of the Big O method of estimating complexity since the
constant of proportionality is of no consequence in the determination of Big O.
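As a toy illustration (the machine names and nanosecond costs below are invented for this sketch, and the algorithm is simply assumed to use n^2 bit operations), the running time k times the number of bit operations differs between machines only by the constant k, which Big O discards:

# Two hypothetical machines with different nanosecond costs per bit operation.
# The absolute times differ by the constant of proportionality, but the growth
# when the input size doubles is identical, which is all that Big O records.
def bit_ops_quadratic(n):
    return n * n                                   # assumed n^2 bit operations

NS_PER_OP = {"machine_A": 2, "machine_B": 7}       # hypothetical constants

for n in (1_000, 2_000, 4_000):
    ops = bit_ops_quadratic(n)
    times = {m: ops * ns for m, ns in NS_PER_OP.items()}
    print(n, times)               # times differ, but both quadruple as n doubles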
A fundamental time estimate in executing an algorithm is polynomial time
(or simply polynomial). A.7 In other words, an algorithm is polynomial when
its complexity is O(n^c) for some constant c ≥ 0, where n is the bitlength
of the input to the algorithm, and c is independent of n. (Observe that any
polynomial of degree c is O(n^c).) In general, these are the desirable algorithms,
since they are the fastest. Therefore, roughly speaking, the polynomial-time
algorithms are the good or efficient algorithms. For instance, the algorithm is
constant if c = 0; if c = 1, it is linear; if c = 2, it is quadratic, and so on.
Examples of polynomial-time algorithms are those for the ordinary arithmetic
operations of addition, subtraction, multiplication, and division.
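As a rough sketch of why addition, say, is polynomial time (the bit-operation count below uses a simplified full-adder model and is not taken from the text), adding two n-bit integers column by column costs a number of bit operations that grows linearly with the bitlength n, i.e. O(n):

def add_bit_ops(a, b):
    """Add two nonnegative integers, counting one 'bit operation' per column."""
    ops, carry, result, shift = 0, 0, 0, 0
    while a or b or carry:
        bit_a, bit_b = a & 1, b & 1
        s = bit_a ^ bit_b ^ carry                         # sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))  # carry out
        result |= s << shift
        a, b, shift, ops = a >> 1, b >> 1, shift + 1, ops + 1
    return result, ops

for n in (8, 16, 32, 64):
    x = (1 << n) - 1                  # an n-bit number
    total, ops = add_bit_ops(x, x)
    print(n, ops)                     # the operation count tracks n, not 2^n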
On the other hand, those algorithms with complexity O(c^(f(n))), where c ∈ R is a
constant and f is a polynomial in n, are exponential-time algorithms, or simply
exponential. A subexponential-time algorithm is one for which the complexity for
input n ∈ N is

L_n(r, c) = O(exp((c + o(1))(ln n)^r (ln ln n)^(1-r))),          (A.16)

where r ∈ R with 0 < r < 1, c is a constant, and o(1) denotes a function
f(n) such that lim_{n→∞} f(n) = 0. A.8
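To get a feel for (A.16) (the parameter choices r = 1/3 and c = 1.9 below are placeholders, not values from the text), the following sketch compares the logarithm of L_n(r, c) with a polynomial bound and an exponential bound in the bitlength of n:

import math

def ln_L(n_bits, r, c):
    """Natural log of L_n(r, c) for an integer n with the given bitlength."""
    ln_n = n_bits * math.log(2)               # ln n for an n_bits-bit integer
    return c * ln_n**r * math.log(ln_n)**(1 - r)

for bits in (256, 512, 1024, 2048):
    ln_n = bits * math.log(2)
    ln_poly = 3 * math.log(ln_n)              # ln of (ln n)^3, a polynomial bound
    ln_sub = ln_L(bits, 1/3, 1.9)             # ln of L_n(1/3, 1.9)
    ln_exp = ln_n                             # ln of n itself, an exponential bound
    print(bits, round(ln_poly, 1), round(ln_sub, 1), round(ln_exp, 1))
# The middle column grows faster than the polynomial one but far slower than
# the exponential one, which is what "subexponential" is meant to capture.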
Subexponential-time algorithms are faster than exponential-time algorithms but
slower than polynomial-time algorithms. These are, again roughly speaking, the
inefficient algorithms. Algorithms with
A.5 To say that a is proportional to b means that a/b = c , a constant, called the constant of
proportionality . This relationship is often written as a ∝ b in the literature.
A.6 A nanosecond is 1/10^9 of a second, that is, a billionth of a second.
A.7 Recall that a (nonconstant) polynomial is a function of the form ∑_{i=0}^{n} a_i x^i for n ∈ N,
where the a_i are the coefficients (see page 484).
A.8 In general, f(n) = o(g(n)) means that lim_{n→∞} f(n)/g(n) = 0. Thus, o(1) is used to
symbolize a function whose limit as n approaches infinity is 0. Also, exp(x) = e^x.