Figure 15.1 Illustration of using a poset to model our current knowledge of the relationships among a collection of objects. A directed acyclic graph (DAG) is used to draw the poset (assume all edges are directed downward). In this example, our knowledge is such that we don't know how A or B relate to any of the other objects. However, we know that both C and G are greater than E and F. Further, we know that C is greater than D, and that E is greater than F.
After making m comparisons between elements in L, we have at least n - m partial orders. Every such partial order needs at least one comparison against K to make sure that K is not somewhere in that partial order. Thus, any algorithm must make at least n comparisons in the worst case.
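A simple sequential scan meets this bound exactly. Here is a minimal sketch in Java (the method name and signature are illustrative, not taken from the text): when K is absent, it makes precisely n comparisons against K.

    static int sequentialSearch(int[] arr, int key) {
        // Compare key against each element in turn. If key is absent,
        // all n elements are examined -- exactly the n comparisons that
        // the lower bound argument above shows are unavoidable.
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i;  // found: position of key in arr
            }
        }
        return -1;         // key is not in the list
    }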
15.2.2 Searching in Sorted Lists
We will now assume that list L is sorted. In this case, is linear search still optimal?
Clearly no, but why not? Because we have additional information to work with that
we do not have when the list is unsorted. We know that the standard binary search
algorithm has a worst case cost of O(log n). Can we do better than this? We can
prove that this is the best possible in the worst case with a proof similar to that used
to show the lower bound on sorting.
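For concreteness, here is a minimal binary search sketch in Java (the method name and signature are illustrative, not taken from the text). Each comparison of K against an element of L halves the set of candidate positions, which is why the worst-case cost is O(log n):

    static int binarySearch(int[] arr, int key) {
        int low = 0;
        int high = arr.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;  // midpoint without int overflow
            if (arr[mid] < key) {
                low = mid + 1;             // key can only be in the upper half
            } else if (arr[mid] > key) {
                high = mid - 1;            // key can only be in the lower half
            } else {
                return mid;                // found
            }
        }
        return -1;                         // range is empty: key not present
    }

The unsigned shift (low + high) >>> 1 computes the midpoint safely even when low + high would exceed the range of int.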
Again we use the decision tree to model our algorithm. Unlike when searching
an unsorted list, comparisons between elements of L tell us nothing new about their
relative order, so we consider only comparisons between K and an element in L. At
the root of the decision tree, our knowledge rules out no positions in L, so all are
potential candidates. As we take branches in the decision tree based on the result
of comparing K to an element in L, we gradually rule out potential candidates.
Eventually we reach a leaf node in the tree representing the single position in L
that can contain K. There must be at least n + 1 nodes in the tree because we have
n + 1 distinct positions that K can be in (any position in L, plus not in L at all).
Some path in the tree must be at least log n levels deep, and the deepest node in the
tree represents the worst case for that algorithm. Thus, any algorithm on a sorted
array requires at least Ω(log n) comparisons in the worst case.
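To make the counting step explicit (a standard argument, sketched here rather than quoted from the text): a binary decision tree has at most 2^i nodes at level i, so a tree of depth d contains at most 2^(d+1) - 1 nodes in total. Requiring at least n + 1 nodes therefore gives

\[
2^{d+1} - 1 \;\ge\; n + 1
\quad\Longrightarrow\quad
d \;\ge\; \log_2(n + 2) - 1 \;=\; \Omega(\log n).
\]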
We can modify this proof to find the average cost lower bound. Again, we model algorithms using decision trees. Except now we are interested not in the depth of the deepest node (the worst case), and therefore not in the tree whose deepest node is as shallow as possible. Instead, we are interested in knowing what the minimum possible average depth of the nodes is, because that corresponds to the average-case cost.