When analyzing these two code fragments, we will assume that n is a power of two. The first code fragment has its outer for loop executed log n + 1 times because on each iteration k is multiplied by two until it reaches n. Because the inner loop always executes n times, the total cost for the first code fragment can be expressed as $\sum_{i=0}^{\log n} n$. Note that a variable substitution takes place here to create the summation, with $k = 2^i$. From Equation 2.3, the solution for this summation is $\Theta(n \log n)$. In the second code fragment, the outer loop is also executed log n + 1 times. The inner loop has cost k, which doubles each time. The summation can be expressed as $\sum_{i=0}^{\log n} 2^i$, where n is assumed to be a power of two and again $k = 2^i$. From Equation 2.8, we know that this summation is simply $\Theta(n)$.
What about other control statements? While loops are analyzed in a manner
similar to for loops. The cost of an if statement in the worst case is the greater
of the costs for the then and else clauses. This is also true for the average case,
assuming that the size of n does not affect the probability of executing one of the
clauses (which is usually, but not necessarily, true). For switch statements, the
worst-case cost is that of the most expensive branch. For subroutine calls, simply
add the cost of executing the subroutine.
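As a concrete illustration of the rule for if statements, consider a hypothetical fragment in which the else clause dominates (test and process are placeholder helpers, not taken from the text):

    if (test(x)) {                       // assume the test itself takes constant time
        for (int i = 0; i < n; i++)      // then clause: Theta(n)
            process(i);
    } else {
        for (int i = 0; i < n; i++)      // else clause: Theta(n^2)
            for (int j = 0; j < n; j++)
                process(i + j);
    }
    // Worst-case (and, under the assumption above, average-case) cost: Theta(n^2).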
There are rare situations in which the probabilities of executing the various branches of an if or switch statement are functions of the input size. For example, for input of size n, the then clause of an if statement might be executed with probability $1/n$. An example would be an if statement that executes the then clause only for the smallest of n values. To perform an average-case analysis for such programs, we cannot simply count the cost of the if statement as being the cost of the more expensive branch. In such situations, the technique of amortized analysis (see Section 14.3) can come to the rescue.
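A sketch of such a situation, using hypothetical helpers findMin, expensiveWork, and cheapWork, might look like the following; the then clause is taken only when A[i] is the smallest of the n values, so for a random position it executes with probability 1/n:

    int smallest = findMin(A, n);        // hypothetical helper returning the minimum
    for (int i = 0; i < n; i++) {
        if (A[i] == smallest)            // true with probability 1/n for random input
            expensiveWork(A, n);         // assumed costly branch
        else
            cheapWork(A[i]);             // assumed constant-time branch
    }

Charging every iteration the cost of the expensive branch would badly overestimate the average case, which is the point being made here.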
Determining the execution time of a recursive subroutine can be difficult. The
running time for a recursive subroutine is typically best expressed by a recurrence
relation. For example, the recursive factorial function fact of Section 2.5 calls
itself with a value one less than its input value. The result of this recursive call is
then multiplied by the input value, which takes constant time. Thus, the cost of
the factorial function, if we wish to measure cost in terms of the number of multiplication operations, is one more than the number of multiplications made by the
recursive call on the smaller input. Because the base case does no multiplications,
its cost is zero. Thus, the running time for this function can be expressed as
$T(n) = T(n-1) + 1$ for $n > 1$;  $T(1) = 0$.
We know from Examples 2.8 and 2.13 that the closed-form solution for this recurrence relation is $\Theta(n)$.
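The fact function itself appears in Section 2.5 and is not reproduced here; a minimal version consistent with the description above might be:

    static long fact(int n) {
        if (n <= 1)                  // base case: no multiplications, so T(1) = 0
            return 1;
        return n * fact(n - 1);      // one multiplication plus the cost of fact(n - 1)
    }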