that can handle n airplanes quickly enough most of the time, but which fails to
perform quickly enough when all n airplanes are coming from the same direction.
For other applications — particularly when we wish to aggregate the cost of
running the program many times on many different inputs — worst-case analy-
sis might not be a representative measure of the algorithm's performance. Often
we prefer to know the average-case running time. This means that we would like
to know the typical behavior of the algorithm on inputs of size n. Unfortunately,
average-case analysis is not always possible. Average-case analysis first requires
that we understand how the actual inputs to the program (and their costs) are dis-
tributed with respect to the set of all possible inputs to the program. For example, it
was stated previously that the sequential search algorithm on average examines half
of the array values. This is only true if the element with value K is equally likely
to appear in any position in the array. If this assumption is not correct, then the
algorithm does not necessarily examine half of the array values in the average case.
See Section 9.2 for further discussion regarding the effects of data distribution on
the sequential search algorithm.
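To make the assumption concrete, here is one possible Java sketch of sequential search (not taken from the text; the method name sequentialSearch and its parameters are illustrative). The average-case claim that about half the positions are examined depends entirely on the distribution of where K appears.

  // Sketch of sequential search: scan positions left to right.
  // Returns the index of the first element equal to K, or -1 if K is absent.
  static int sequentialSearch(int[] A, int K) {
    for (int i = 0; i < A.length; i++) {
      if (A[i] == K) {
        return i;          // found K after examining i+1 array values
      }
    }
    return -1;             // K not found: all n values were examined
  }

If K is equally likely to appear in each of the n positions, the expected number of values examined is (1 + 2 + ... + n)/n = (n+1)/2, roughly half the array. If K tends to appear near the front, or is often absent altogether, the average changes accordingly.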
The characteristics of a data distribution have a significant effect on many
search algorithms, such as those based on hashing (Section 9.4) and search trees
(e.g., see Section 5.4). Incorrect assumptions about data distribution can have dis-
astrous consequences on a program's space or time performance.
Unusual data
distributions can also be used to advantage, as shown in Section 9.2.
In summary, for real-time applications we are likely to prefer a worst-case anal-
ysis of an algorithm. Otherwise, we often desire an average-case analysis if we
know enough about the distribution of our input to compute the average case. If
not, then we must resort to worst-case analysis.
3.3 A Faster Computer, or a Faster Algorithm?
Imagine that you have a problem to solve, and you know of an algorithm whose
running time is proportional to n². Unfortunately, the resulting program takes ten
times too long to run. If you replace your current computer with a new one that
is ten times faster, will the n² algorithm become acceptable? If the problem size
remains the same, then perhaps the faster computer will allow you to get your work
done quickly enough even with an algorithm having a high growth rate. But a funny
thing happens to most people who get a faster computer. They don't run the same
problem faster. They run a bigger problem! Say that on your old computer you
were content to sort 10,000 records because that could be done by the computer
during your lunch break. On your new computer you might hope to sort 100,000
records in the same time. You won't be back from lunch any sooner, so you are
better off solving a larger problem. And because the new machine is ten times
faster, you would like to sort ten times as many records.
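As a quick sanity check of that hope (a hedged sketch; the method name relativeTime and its units are illustrative, not from the text): with an n² algorithm, sorting 100,000 records is 100 times as much work as sorting 10,000, so even a machine ten times faster needs ten times as long as the old lunch-break run.

  // Sketch: relative cost of an n^2 algorithm, in arbitrary work units,
  // divided by the machine's relative speed.
  static double relativeTime(long n, double machineSpeed) {
    return (double) n * n / machineSpeed;   // time proportional to n^2 / speed
  }

  // Example use:
  //   relativeTime(10_000, 1.0)    -> 1.0e8  (old machine, 10,000 records)
  //   relativeTime(100_000, 10.0)  -> 1.0e9  (new machine, 100,000 records)
  // The ten-times-larger problem takes ten times as long, even on the
  // ten-times-faster computer.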