of the header node) must be set to 1. The new node is inserted, yielding the Skip
List of Figure 16.5(a).
Next, insert the value 20. Assume this time that randomLevel returns 0. The
search process goes to the node with value 10, and the new node is inserted after,
as shown in Figure 16.5(b). The third node inserted has value 5, and again assume
that randomLevel returns 0. This yields the Skip List of Figure 16.5(c).
The fourth node inserted has value 2, and assume that randomLevel
returns 3. This means that the level of the Skip List must rise, causing the header
node to gain an additional two (null) pointers. At this point, the new node is
added to the front of the list, as shown in Figure 16.5(d).
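Because the behavior of the whole structure hinges on randomLevel, it is worth seeing how such a function is typically written. Below is a minimal Java sketch under the standard assumption that each successive level is reached with probability 1/2; the class name and Random field are illustrative, not taken from the text.

    import java.util.Random;

    class Levels {
        private static final Random RND = new Random();

        // Flip a fair coin; each "heads" raises the level by one,
        // so a node comes out at level i with probability 1/2^(i+1).
        static int randomLevel() {
            int level = 0;
            while (RND.nextBoolean())
                level++;
            return level;
        }
    }

Under this scheme, the level-1 and level-3 results assumed above occur with probability 1/4 and 1/16, respectively.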
Finally, insert a node with value 30 at level 2. This time, let us take a close
look at what array update is used for. It stores the farthest node reached at each
level during the search for the proper location of the new node. The search
process begins in the header node at level 3 and proceeds to the node storing value 2.
Because forward[3] for this node is null, we cannot go further at this level.
Thus, update[3] stores a pointer to the node with value 2. Likewise, we cannot
proceed at level 2, so update[2] also stores a pointer to the node with value 2.
At level 1, we proceed to the node storing value 10. This is as far as we can go
at level 1, so update[1] stores a pointer to the node with value 10. Finally, at
level 0 we end up at the node with value 20. At this point, we can add in the new
node with value 30. For each level i, the new node's forward[i] pointer is
set to update[i].forward[i], and the nodes stored in update[i] for
indices 0 through 2 have their forward[i] pointers changed to point to the new
node. This “splices” the new node into the Skip List at all levels.
The remove function is left as an exercise. It is similar to insertion in that the
update array is built as part of searching for the record to be deleted. Then those
nodes specified by the update array have their forward pointers adjusted to point
around the node being deleted.
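The pointer adjustment itself is the mirror image of the splice shown earlier. A sketch, reusing the hypothetical SkipNode type and again assuming update was built while searching for the doomed record (the rest of remove, such as lowering the list level when its topmost levels empty out, is left to the exercise):

    // Unlink target at every level where a predecessor points to it.
    static void unlink(SkipNode[] update, SkipNode target, int listLevel) {
        for (int i = 0; i <= listLevel; i++)
            if (update[i].forward[i] == target)
                update[i].forward[i] = target.forward[i]; // bypass target
    }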
A newly inserted node could have a high level generated by randomLevel ,
or a low level. It is possible that many nodes in the Skip List could have many
pointers, leading to unnecessary insert cost and yielding poor (i.e., Θ(n)) performance
during search, because not many nodes will be skipped. Conversely, too
many nodes could have a low level. In the worst case, all nodes could be at level 0,
equivalent to a regular linked list. If so, search will again require Θ(n) time. However,
the probability that performance will be poor is quite low: because each node
independently receives level 0 with probability 1/2, there is only one chance in
2^10 = 1024 that ten nodes in a row will all be at level 0. The motto of probabilistic
data structures such as the Skip List is “Don't worry, be happy.” We simply accept
the results of randomLevel and expect that probability will eventually work in
our favor. The advantage of this approach is that the algorithms are simple, while
requiring only Θ(log n) time for all operations in the average case.