performance for others.
They also show that for some algorithms, the order of
input has little effect.
These figures show a number of interesting results. As expected, the O(n²) sorts are quite poor performers for large arrays. Insertion Sort is by far the best of this group, unless the array is already reverse sorted. Shellsort is clearly superior to any of these O(n²) sorts for lists of even 100 elements. Optimized Quicksort is clearly the best overall algorithm for all but lists of 10 elements. Even for small arrays, optimized Quicksort performs well because it does one partition step before calling Insertion Sort. Compared to the other O(n log n) sorts, unoptimized Heapsort is quite slow due to the overhead of the class structure. When all of this is stripped away and the algorithm is implemented to manipulate an array directly, it is still somewhat slower than Mergesort. In general, optimizing the various algorithms makes a noticeable improvement for larger array sizes.
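To make the Quicksort optimization concrete, here is a minimal sketch of such a hybrid: the recursion hands any small subarray to Insertion Sort after partitioning the larger ones. The cutoff of 10, the Lomuto partition, and all names are illustrative assumptions, not the book's actual code.

import java.util.Arrays;

// Sketch of a hybrid Quicksort: small subarrays go to Insertion Sort.
public class HybridQuicksort {
    static final int CUTOFF = 10;   // assumed small-array threshold

    public static void sort(int[] a) {
        quicksort(a, 0, a.length - 1);
    }

    private static void quicksort(int[] a, int lo, int hi) {
        if (hi - lo + 1 <= CUTOFF) {     // small subarray:
            insertionSort(a, lo, hi);    // cheap to finish directly
            return;
        }
        int p = partition(a, lo, hi);    // one partition step first
        quicksort(a, lo, p - 1);
        quicksort(a, p + 1, hi);
    }

    // Lomuto partition using the last element as the pivot.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) swap(a, i++, j);
        swap(a, i, hi);
        return i;
    }

    // Insertion Sort restricted to a[lo..hi].
    private static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i <= hi; i++)
            for (int j = i; j > lo && a[j] < a[j - 1]; j--)
                swap(a, j, j - 1);
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] a = {9, 4, 7, 1, 8, 2, 6, 3, 5, 0, 11, 10};
        sort(a);
        System.out.println(Arrays.toString(a)); // sorted output
    }
}

The payoff is that Insertion Sort's per-element cost is very low, and partitioning guarantees no element in a small subarray is far from its final position.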
Overall, Radix Sort is a surprisingly poor performer. If the code had been tuned
to use bit shifting of the key value, it would likely improve substantially; but this
would seriously limit the range of element types that the sort could support.
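A sketch of the bit-shifting tuning the text alludes to appears below: an LSD Radix Sort that extracts 8-bit digits with shifts and masks instead of division and modulus. Restricting keys to non-negative int values is precisely the loss of generality the text warns about; the digit width and all names are our assumptions, not the tuned code the authors measured.

import java.util.Arrays;

// LSD Radix Sort on non-negative ints, one byte of the key per pass.
public class BitRadixSort {
    public static void sort(int[] a) {
        int[] out = new int[a.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[257];
            for (int x : a)                          // histogram of this digit
                count[((x >>> shift) & 0xFF) + 1]++;
            for (int i = 0; i < 256; i++)            // prefix sums mark bin starts
                count[i + 1] += count[i];
            for (int x : a)                          // stable scatter into bins
                out[count[(x >>> shift) & 0xFF]++] = x;
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }

    public static void main(String[] args) {
        int[] a = {170, 45, 75, 90, 802, 24, 2, 66};
        sort(a);
        System.out.println(Arrays.toString(a)); // [2, 24, 45, 66, 75, 90, 170, 802]
    }
}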
7.9 Lower Bounds for Sorting
This book contains many analyses of algorithms. These analyses generally define the upper and lower bounds for algorithms in their worst and average cases. For many of the algorithms presented so far, analysis has been easy. This section considers a more difficult task: an analysis of the cost of a problem as opposed to an algorithm. The upper bound for a problem can be defined as the asymptotic cost of the fastest known algorithm. The lower bound defines the best possible efficiency for any algorithm that solves the problem, including algorithms not yet invented. Once the upper and lower bounds for the problem meet, we know that no future algorithm can possibly be (asymptotically) more efficient.
A simple estimate for a problem's lower bound can be obtained by measuring the size of the input that must be read and the output that must be written. Certainly no algorithm can be more efficient than the problem's I/O time. From this we see that the sorting problem cannot be solved by any algorithm in less than Ω(n) time, because it takes at least n steps to read and write the n values to be sorted. Alternatively, any sorting algorithm must at least look at every input value to recognize whether the input values are in sorted order. So, based on our current knowledge of sorting algorithms and the size of the input, we know that the problem of sorting is bounded below by Ω(n) and above by O(n log n).
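The Ω(n) floor can be made concrete: even just confirming that an input is already sorted forces an algorithm to examine every value, as in this sketch (the class and method names are ours, for illustration).

// The n - 1 comparisons below touch every element once, which is why
// even verifying sortedness already costs Ω(n).
class SortedCheck {
    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++)
            if (a[i - 1] > a[i])
                return false;  // out of order: a sort still has work to do
        return true;           // every adjacent pair was examined
    }
}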
Computer scientists have spent much time devising efficient general-purpose
sorting algorithms, but no one has ever found one that is faster than O(n log n) in
the worst or average cases. Should we keep searching for a faster sorting algorithm?