In practice, the Skip List will probably have better performance than a BST. The
BST can have bad performance caused by the order in which data are inserted. For
example, if n nodes are inserted into a BST in ascending order of their key value,
then the BST will look like a linked list with the deepest node at depth n − 1. The
Skip List's performance does not depend on the order in which values are inserted
into the list. As the number of nodes in the Skip List increases, the probability of
encountering the worst case decreases geometrically. Thus, the Skip List illustrates
a tension between the theoretical worst case (in this case, Θ(n) for a Skip List
operation) and a rapidly increasing probability of achieving the Θ(log n) average-case
performance, a trade-off that characterizes probabilistic data structures.
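For illustration, the sketch below shows the degenerate case described above: inserting n keys in ascending order into a naive, unbalanced BST produces a chain whose deepest node sits at depth n − 1. The class and method names here are made up for this example and are not taken from the book's code.

// A naive, unbalanced BST written only to demonstrate the degenerate shape.
public class NaiveBST {
    private static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    private Node root;

    // Standard unbalanced insert: no rebalancing of any kind is performed.
    void insert(int key) { root = insert(root, key); }

    private Node insert(Node curr, int key) {
        if (curr == null) return new Node(key);
        if (key < curr.key) curr.left = insert(curr.left, key);
        else                curr.right = insert(curr.right, key);
        return curr;
    }

    // Height of the subtree rooted at curr (an empty tree has height -1).
    private int height(Node curr) {
        if (curr == null) return -1;
        return 1 + Math.max(height(curr.left), height(curr.right));
    }

    public static void main(String[] args) {
        NaiveBST t = new NaiveBST();
        int n = 10;
        for (int i = 1; i <= n; i++) t.insert(i);   // keys in ascending order
        System.out.println(t.height(t.root));        // prints 9, i.e. n - 1
    }
}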
16.3 Numerical Algorithms
This section presents a variety of algorithms related to mathematical computations
on numbers. Examples are activities like multiplying two numbers or raising a
number to a given power. In particular, we are concerned with situations where
built-in integer or floating-point operations cannot be used because the values being
operated on are too large. Similar concerns arise for operations on polynomials or
matrices.
Since we cannot rely on the hardware to process the inputs in a single constant-time
operation, we are concerned with how to implement the operation most effectively
so as to minimize the time cost. This raises the question of how we should apply
our normal measures of asymptotic cost in terms of growth rates on input size.
First, what is an instance of addition or multiplication? Each value of the operands
yields a different problem instance. And what is the input size when multiplying
two numbers? If we view the input size as two (since two numbers are input), then
any non-constant-time algorithm has a growth rate that is infinitely high compared
to the growth of the input. This makes no sense, especially in light of the fact that
we know from grade school arithmetic that adding or multiplying numbers does
seem to get more difficult as the value of the numbers involved increases. In fact,
we know from the standard grade school algorithms that the cost of addition
is linear in the number of digits being added, and that multiplication has cost Θ(nm)
when multiplying an m-digit number by an n-digit number.
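As a point of reference, here is a rough sketch of the grade-school multiplication algorithm on digit arrays. The representation (decimal digits stored least-significant digit first) and the class and method names are choices made for this example, not code from the text. The nested loops visit every pair of digits exactly once, which is exactly where the Θ(nm) cost comes from.

public class GradeSchool {
    // Multiply an m-digit number a by an n-digit number b; digits are stored
    // least-significant first, base 10. Cost is Theta(nm): one inner-loop pass
    // of length n for each of the m digits of a.
    static int[] multiply(int[] a, int[] b) {
        int[] result = new int[a.length + b.length];  // enough room for the product
        for (int i = 0; i < a.length; i++) {          // m outer iterations
            int carry = 0;
            for (int j = 0; j < b.length; j++) {      // n inner iterations each
                int t = result[i + j] + a[i] * b[j] + carry;
                result[i + j] = t % 10;
                carry = t / 10;
            }
            result[i + b.length] += carry;
        }
        return result;                                // may carry leading zeros at the top
    }

    public static void main(String[] args) {
        int[] a = {3, 2, 1};                          // 123, least-significant digit first
        int[] b = {5, 4};                             // 45
        System.out.println(java.util.Arrays.toString(multiply(a, b)));
        // prints [5, 3, 5, 5, 0], i.e. 5535
    }
}

Addition on the same representation needs only a single pass over the digits with a carry, which matches the linear cost claimed above.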
The number of digits for the operands does appear to be a key consideration
when we are performing a numeric algorithm that is sensitive to input size. The
number of digits is simply the log of the value, for a suitable base of the log. Thus,
for the purpose of calculating asymptotic growth rates of algorithms, we will consider
the “size” of an input value to be the log of that value. Given this view, there
are a number of features that seem to characterize such operations:
• Arithmetic operations on large values are not cheap.
• There is only one instance of value n.
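To connect these observations with the choice of log n as the input size, the small sketch below (the class name and sample values are illustrative only) checks that the number of decimal digits of a positive value n is floor(log10 n) + 1, both for a value that fits in a long and for one that requires BigInteger.

import java.math.BigInteger;

public class DigitCount {
    public static void main(String[] args) {
        long n = 1_000_000L;
        // Digit count of n equals floor(log10 n) + 1.
        int digits = (int) Math.floor(Math.log10(n)) + 1;
        System.out.println(digits);                   // 7 digits

        // The same relationship for a value far too large for the built-in types:
        BigInteger big = BigInteger.TEN.pow(100);     // 10^100
        System.out.println(big.toString().length());  // 101 digits
    }
}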