Note: What's best?
Usually the "best" solution to a problem balances various criteria such as time, space, generality, programming effort, and so on.
The process of measuring the complexity of algorithms is called the analysis of algorithms. We will concentrate on the time complexity of algorithms, because it is usually more important than the space complexity. You should realize that an inverse relationship often exists between an algorithm's time complexity and its space complexity. If you revise an algorithm to save execution time, you usually will need more space. If you reduce an algorithm's space requirement, it likely will require more time to execute. Sometimes, however, you will be able to save both time and space.

Your measure of the complexity of an algorithm should be easy to compute, certainly easier than implementing the algorithm. You should express this measure in terms of the size of the problem. This problem size is the number of items that an algorithm processes. For example, if you are searching a collection of data, the problem size is the number of items in the collection. Such a measure enables you to compare the relative cost of algorithms as a function of the size of the problem. Typically, we are interested in large problems; a small problem is likely to take little time, even if the algorithm is inefficient.
4.5
Realize that you cannot compute the actual time requirement of an algorithm. After all, you have not implemented the algorithm in Java, and you have not chosen the computer. Instead, you find a function of the problem size that behaves like the algorithm's actual time requirement. That is, as the time requirement increases by some factor, the value of the function increases by the same factor, and vice versa. The value of the function is said to be directly proportional to the time requirement. Such a function is called a growth-rate function because it measures how an algorithm's time requirement grows as the problem size grows. Because they measure time requirements, growth-rate functions have positive values. By comparing the growth-rate functions of two algorithms, you can see whether one algorithm is faster than the other for large-size problems.
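For instance, an algorithm whose growth-rate function is n² falls further and further behind one whose function is n as the problem size grows. The following short sketch, our own illustration rather than code from the text, prints the two functions side by side:

// A sketch (our own illustration, not code from the text) that prints
// two candidate growth-rate functions, n and n^2, side by side.
// The quadratic function quickly dominates as the problem size grows.
public class GrowthRates
{
    public static void main(String[] args)
    {
        for (int n = 10; n <= 1_000_000; n *= 10)
            System.out.printf("n = %,10d    n^2 = %,16d%n", n, (long) n * n);
    }
}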
4.6
Example. Consider again the problem of computing the sum 1 + 2 + . . . + n for any positive integer n. Figure 4-1 gives three algorithms—A, B, and C—to perform this computation. Algorithm A computes the sum 0 + 1 + 2 + . . . + n from left to right. Algorithm B computes 0 + (1) + (1 + 1) + (1 + 1 + 1) + . . . + (1 + 1 + . . . + 1), and Algorithm C uses an algebraic identity to compute the sum. By executing the Java code in Segment 4.2, we found that Algorithm B is the slowest. We now want to predict this behavior without actually running the code.
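Figure 4-1 is not reproduced here; the following Java methods are one plausible rendering of the three algorithms, offered as a sketch rather than the book's exact code:

// One plausible Java rendering of the three algorithms in Figure 4-1;
// treat this as a sketch, not the book's exact code.
public class SumAlgorithms
{
    // Algorithm A: add the terms 1 through n from left to right.
    public static int sumA(int n)
    {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum = sum + i;
        return sum;
    } // end sumA

    // Algorithm B: build each term i by adding 1 to itself i times.
    public static int sumB(int n)
    {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                sum = sum + 1;
        return sum;
    } // end sumB

    // Algorithm C: use the identity 1 + 2 + . . . + n = n(n + 1) / 2.
    public static int sumC(int n)
    {
        return n * (n + 1) / 2;
    } // end sumC

    public static void main(String[] args)
    {
        int n = 100;
        System.out.println(sumA(n) + " " + sumB(n) + " " + sumC(n)); // each is 5050
    }
}

Intuitively, Algorithm B redoes work that Algorithm A does once, while Algorithm C does a fixed amount of work regardless of n, which is consistent with B measuring slowest.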
So how can we tell which algorithm is slowest and which is fastest? We can begin to answer
these questions by considering both the size of the problem and the effort involved. The integer n is
a measure of the problem size: As n increases, the sum involves more terms. To measure the effort,
or time requirement, of an algorithm, we must find an appropriate growth-rate function. To do so,
we might begin by counting the number of operations required by the algorithm.
For example, Algorithm A in Figure 4-1 contains the pseudocode statement
for i = 1 to n
This statement represents the following loop-control logic:
i = 1
while (i <= n)
{
    ...
    i = i + 1
}
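To see where such a count leads, here is an instrumented version of Algorithm A, our own sketch rather than code from the book, that tallies the loop-control operations along with the additions into the sum:

// An instrumented version of Algorithm A (our own sketch) that tallies
// the loop-control operations together with the additions into the sum.
public class CountOperations
{
    public static void main(String[] args)
    {
        int n = 10;
        long ops = 0;
        int sum = 0;
        int i = 1;          // 1 initialization
        ops++;
        while (i <= n)
        {
            ops++;          // each true test of i <= n
            sum = sum + i;  // 1 addition per iteration
            ops++;
            i = i + 1;      // 1 increment per iteration
            ops++;
        }
        ops++;              // the final, false test of i <= n
        System.out.println("sum = " + sum + ", operations = " + ops);
        // The total is 1 + (n + 1) + n + n = 3n + 2 operations, so the
        // count grows in direct proportion to the problem size n.
    }
}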
 