22.1 Introduction
Key Point: Algorithm design is the process of developing a mathematical procedure for solving a problem. Algorithm analysis is the process of predicting the performance of an algorithm.
The preceding two chapters introduced classic data structures (lists, stacks, queues, priority
queues, sets, and maps) and applied them to solve problems. This chapter uses a variety
of examples to introduce common algorithmic techniques (dynamic programming,
divide-and-conquer, and backtracking) for developing efficient algorithms. Efficient
algorithms for specific problems are introduced later in the book, in Chapters 23-29. Before
developing efficient algorithms, we need to address the question of how to measure algorithm efficiency.
22.2 Measuring Algorithm Efficiency Using Big O Notation
Key Point: Big O notation expresses an algorithm's time complexity as a function of the input size, ignoring multiplicative constants and nondominating terms in that function. For example, an algorithm that takes 3n² + 5n + 2 steps is O(n²): the constant factor 3 and the nondominating terms 5n and 2 are dropped.
Suppose two algorithms perform the same task, such as search (linear search vs. binary
search). Which one is better? To answer this question, you might implement the algorithms
and run the programs to measure their execution times. But there are two problems with this approach:
First, many tasks run concurrently on a computer. The execution time of a particular
program depends on the system load.
Second, the execution time depends on the specific input. Consider, for example, linear
search and binary search. If the element to be searched for happens to be the first in the
list, linear search will find it more quickly than binary search.
It is therefore difficult to compare algorithms by measuring their execution times. To overcome
these problems, a theoretical approach was developed to analyze algorithms independently of
computers and specific input. This approach approximates the effect of a change in input size
on an algorithm's running time. In this way, you can see how fast an algorithm's execution time
increases as the input size increases, so you can compare two algorithms by examining their growth rates.
Consider linear search. The linear search algorithm compares the key with the elements in
the array sequentially until the key is found or the array is exhausted. If the key is not in the
array, the search requires n comparisons for an array of size n. If the key is in the array, it
requires n/2 comparisons on average. The algorithm's execution time is proportional to the size
of the array: if you double the size of the array, you can expect the number of comparisons to
double. The algorithm grows at a linear rate, so its growth rate has an order of magnitude of n.
Computer scientists use Big O notation to represent this “order of magnitude.” Using this notation,
the complexity of the linear search algorithm is O(n), pronounced “order of n.” We call an
algorithm with a time complexity of O(n) a linear algorithm; it exhibits a linear growth rate.
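To make this concrete, here is a minimal linear search in Java. It is a sketch only: the class name, method name, and int[] signature are illustrative choices for this example, not the chapter's own listing.

public class LinearSearchDemo {
  // Returns the index of key in list, or -1 if key is not present.
  // Worst case (key absent or in the last position): n comparisons, so O(n).
  public static int linearSearch(int[] list, int key) {
    for (int i = 0; i < list.length; i++) {
      if (list[i] == key)
        return i; // best case: key is the first element, one comparison
    }
    return -1;
  }

  public static void main(String[] args) {
    int[] list = {9, 4, 2, 7, 5};
    System.out.println(linearSearch(list, 7)); // prints 3
    System.out.println(linearSearch(list, 8)); // prints -1 (key not found)
  }
}

Doubling the length of list roughly doubles the number of comparisons in the worst case, which is exactly the linear growth rate that O(n) captures.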
For the same input size, an algorithm's execution time may vary depending on the input.
An input that results in the shortest execution time is called the best-case input, and an input
that results in the longest execution time is the worst-case input. Best-case analysis and
worst-case analysis analyze an algorithm for its best-case and worst-case inputs, respectively.
Neither is representative of typical behavior, but worst-case analysis is very useful: you can
be assured that the algorithm will never be slower than the worst case. An average-case
analysis attempts to determine the average running time over all possible inputs of the same
size. Average-case analysis is ideal, but difficult to perform, because for many problems it is
hard to determine the relative probabilities and distributions of the various input instances.
Worst-case analysis is easier to perform, so analysis is generally conducted for the worst case.
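The difference between the cases can be observed directly. The following sketch instruments the linear search above with a comparison counter; the comparisons field is an assumption added purely for illustration.

public class SearchCaseDemo {
  static int comparisons; // counts key comparisons (added for illustration)

  static int linearSearch(int[] list, int key) {
    comparisons = 0;
    for (int i = 0; i < list.length; i++) {
      comparisons++;
      if (list[i] == key)
        return i;
    }
    return -1;
  }

  public static void main(String[] args) {
    int[] list = {1, 2, 3, 4, 5, 6, 7, 8};
    linearSearch(list, 1); // best-case input: key is the first element
    System.out.println("Best case: " + comparisons + " comparisons");  // 1
    linearSearch(list, 9); // worst-case input: key is not in the array
    System.out.println("Worst case: " + comparisons + " comparisons"); // 8
  }
}

An average-case count for this algorithm falls between these two extremes, at about n/2 comparisons when the key is present.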