the concept of an adversarial lower bounds proof. Section 15.5 illustrates the con-
cept of a state space lower bound. Section 15.6 presents a linear time worst-case
algorithm for finding the ith biggest element in a list. Section 15.7 continues our
discussion of sorting with a quest for the algorithm that requires the absolute
fewest comparisons to sort a list.
15.1 Introduction to Lower Bounds Proofs
The lower bound for the problem is the tightest (highest) lower bound that we can
prove for all possible algorithms that solve the problem. 1 This can be a difficult bar,
given that we cannot possibly know all algorithms for any problem, because there
are theoretically an infinite number. However, we can often recognize a simple
lower bound based on the amount of input that must be examined. For example,
we can argue that the lower bound for any algorithm to find the maximum-valued
element in an unsorted list must be Ω(n), because any algorithm must examine all
of the inputs to be sure that it actually finds the maximum value.
In the case of maximum finding, the fact that we know of a simple algorithm
that runs in O(n) time, combined with the fact that any algorithm needs Ω(n) time,
is significant. Because our upper and lower bounds meet (within a constant factor),
we know that we do have a “good” algorithm for solving the problem. It is possible
that someone can develop an implementation that is a “little” faster than an existing
one, by a constant factor. But we know that it is not possible to develop one that is
asymptotically better.
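For concreteness, here is a minimal sketch of such a simple O(n) algorithm, a sequential scan for the maximum (the method name findMax is ours, and the array is assumed to be non-empty). It performs n - 1 comparisons and examines every element exactly once, so its O(n) cost meets the Ω(n) lower bound for the problem.

static int findMax(int[] A) {
  int max = A[0];                 // best value seen so far
  for (int i = 1; i < A.length; i++) {
    if (A[i] > max) {
      max = A[i];                 // found a new maximum
    }
  }
  return max;
}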
We must be careful about how we interpret this last statement, however. The
world is certainly better off for the invention of Quicksort, even though Mergesort
was available at the time. Quicksort is not asymptotically faster than Mergesort, yet
is not merely a “tuning” of Mergesort either. Quicksort is a substantially different
approach to sorting. So even when our upper and lower bounds for a problem meet,
there are still benefits to be gained from a new, clever algorithm.
So now we have an answer to the question “How do I know if I have a good
algorithm to solve a problem?” An algorithm is good (asymptotically speaking) if
its upper bound matches the problem's lower bound. If they match, we know to
stop trying to find an (asymptotically) faster algorithm. What if the (known) upper
bound for our algorithm does not match the (known) lower bound for the problem?
In this case, we might not know what to do. Is our upper bound flawed, and the
algorithm is really faster than we can prove? Is our lower bound weak, and the true
lower bound for the problem is greater? Or is our algorithm simply not the best?
1 Throughout this discussion, it should be understood that any mention of bounds must specify
which class of inputs is being considered. Do we mean the bound for the worst-case input? The
average cost over all inputs? Regardless of which class of inputs we consider, all of the issues raised
apply equally.