a constant run time, which is represented in Big O notation as O(1) and pronounced
"order one." An algorithm that's O(1) does not necessarily require only one comparison.
O(1) just means that the number of comparisons is constant; it does not grow as the size
of the array increases. An algorithm that tests whether the first element of an array is equal
to any of the next three elements is still O(1) even though it requires three comparisons.
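That constant-time test might be sketched as follows (the class and method names here are our own, assuming an int array):

```java
// Sketch of a constant-time check: is the first element equal to any of
// the next three? The comparison count (at most three) never grows with
// the array's length, so the method is O(1).
public class ConstantTimeCheck {
    public static boolean firstMatchesNextThree(int[] array) {
        for (int i = 1; i <= 3 && i < array.length; i++) {
            if (array[0] == array[i]) {
                return true; // found a match within the first three
            }
        }
        return false; // at most three comparisons were made, regardless of length
    }

    public static void main(String[] args) {
        System.out.println(firstMatchesNextThree(new int[]{5, 1, 5, 9})); // true
        System.out.println(firstMatchesNextThree(new int[]{5, 1, 2, 9})); // false
    }
}
```

Whether the array holds 4 elements or 4 million, the loop body runs at most three times.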
19.3.2 O(n) Algorithms
An algorithm that tests whether the first array element is equal to any of the other array
elements will require at most n - 1 comparisons, where n is the number of array elements.
If the array has 10 elements, this algorithm requires up to nine comparisons. If the array
has 1000 elements, it requires up to 999 comparisons. As n grows larger, the n part of the
expression "dominates," and subtracting one becomes inconsequential. Big O is designed
to highlight these dominant terms and ignore terms that become unimportant as n grows.
For this reason, an algorithm that requires a total of n - 1 comparisons (such as the one
we described earlier) is said to be O(n). An O(n) algorithm is referred to as having a linear
run time. O(n) is often pronounced "on the order of n" or simply "order n."
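The linear test described above might be sketched like this (again with names of our own choosing):

```java
// Sketch of a linear-time check: is the first element equal to any other
// element? An n-element array needs at most n - 1 comparisons, so the
// work grows linearly with n; the method is O(n).
public class LinearTimeCheck {
    public static boolean firstMatchesAnyOther(int[] array) {
        for (int i = 1; i < array.length; i++) {
            if (array[0] == array[i]) {
                return true; // match found before scanning the whole array
            }
        }
        return false; // exactly n - 1 comparisons were made
    }

    public static void main(String[] args) {
        System.out.println(firstMatchesAnyOther(new int[]{7, 3, 9, 7})); // true
        System.out.println(firstMatchesAnyOther(new int[]{7, 3, 9, 4})); // false
    }
}
```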
19.3.3 O(n²) Algorithms
Now suppose you have an algorithm that tests whether any element of an array is dupli-
cated elsewhere in the array. The first element must be compared with every other element
in the array. The second element must be compared with every other element except the
first (it was already compared to the first). The third element must be compared with every
other element except the first two. In the end, this algorithm will end up making (n - 1)
+ (n - 2) + … + 2 + 1, or n²/2 - n/2, comparisons. As n increases, the n² term dominates
and the n term becomes inconsequential. Big O notation highlights the n² term,
leaving n²/2. And, as we'll soon see, constant factors are also omitted in Big O notation.
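The duplicate test just described can be sketched as a pair of nested loops (a hypothetical hasDuplicate method, assuming an int array):

```java
// Sketch of a quadratic-time check: does any element appear more than
// once? Each element is compared with every element after it, giving
// (n-1) + (n-2) + ... + 1 = n²/2 - n/2 comparisons in the worst case.
public class DuplicateCheck {
    public static boolean hasDuplicate(int[] array) {
        for (int i = 0; i < array.length; i++) {
            // start at i + 1: earlier elements were already compared with array[i]
            for (int j = i + 1; j < array.length; j++) {
                if (array[i] == array[j]) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicate(new int[]{1, 2, 3, 2})); // true
        System.out.println(hasDuplicate(new int[]{1, 2, 3, 4})); // false
    }
}
```

Note how the inner loop shrinks on each pass, exactly mirroring the sum (n - 1) + (n - 2) + … + 1 in the text.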
Big O is concerned with how an algorithm's run time grows in relation to the number
of items processed. Suppose an algorithm requires n² comparisons. With four elements,
the algorithm requires 16 comparisons; with eight elements, 64 comparisons. With this
algorithm, doubling the number of elements quadruples the number of comparisons. Con-
sider a similar algorithm requiring n²/2 comparisons. With four elements, the algorithm
requires eight comparisons; with eight elements, 32 comparisons. Again, doubling the
number of elements quadruples the number of comparisons. Both of these algorithms grow
as the square of n, so Big O ignores the constant and both algorithms are considered to be
O(n²), which is referred to as quadratic run time and pronounced "on the order of n-
squared" or more simply "order n-squared."
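You can confirm the doubling-quadruples behavior with a little arithmetic; the helper method below (our own, not from the text) evaluates the comparison count n²/2 - n/2 derived above:

```java
// Worst-case comparison count for the pairwise duplicate scan on n
// elements: (n-1) + (n-2) + ... + 1 = n²/2 - n/2.
public class ComparisonGrowth {
    public static long comparisons(long n) {
        return n * n / 2 - n / 2;
    }

    public static void main(String[] args) {
        // Doubling n roughly quadruples the work once n is large.
        System.out.println(comparisons(1000)); // 499500
        System.out.println(comparisons(2000)); // 1999000  (about 4x as many)
    }
}
```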
When n is small, O(n²) algorithms (running on today's computers) will not noticeably
affect performance. But as n grows, you'll start to notice the performance degradation. An
O(n²) algorithm running on a million-element array would require a trillion "operations"
(where each could actually require several machine instructions to execute). This could
take several hours. A billion-element array would require a quintillion operations, a
number so large that the algorithm could take decades! O(n²) algorithms, unfortunately,
are easy to write, as you'll see in this chapter. You'll also see algorithms with more favorable
Big O measures. These efficient algorithms often take a bit more cleverness and work to
create, but their superior performance can be well worth the extra effort, especially as n
gets large and as algorithms are combined into larger programs.