We now show that this is the same as

$$
\begin{aligned}
\sum_{i=1}^{\sqrt{n}} P(\text{need at least $i$ probes})
  &= 1 + (1 - P_1) + (1 - P_1 - P_2) + \cdots + P_{\sqrt{n}} \\
  &= (P_1 + \cdots + P_{\sqrt{n}}) + (P_2 + \cdots + P_{\sqrt{n}}) + (P_3 + \cdots + P_{\sqrt{n}}) + \cdots \\
  &= 1P_1 + 2P_2 + 3P_3 + \cdots + \sqrt{n}\,P_{\sqrt{n}}.
\end{aligned}
$$
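The rearrangement above, summing tail probabilities instead of weighting each $P_i$ by $i$, can be checked numerically. Here is a small sketch (class and variable names are my own) that compares the two sums for an arbitrary probability distribution:

```java
// Numeric check of the identity
//   sum_{i=1}^{m} i*P_i  ==  sum_{i=1}^{m} P(need at least i probes)
// for an arbitrary distribution P_1..P_m that sums to 1.
public class ProbeSumIdentity {
    public static void main(String[] args) {
        double[] p = {0.5, 0.25, 0.15, 0.07, 0.03}; // P_1..P_5, sums to 1

        // Left side: expected number of probes = sum of i * P_i.
        double expected = 0.0;
        for (int i = 1; i <= p.length; i++) {
            expected += i * p[i - 1];
        }

        // Right side: sum of tail probabilities P(need at least i probes).
        double tails = 0.0;
        for (int i = 1; i <= p.length; i++) {
            double atLeastI = 0.0; // P(need >= i) = P_i + ... + P_m
            for (int j = i; j <= p.length; j++) {
                atLeastI += p[j - 1];
            }
            tails += atLeastI;
        }

        System.out.println(expected);
        System.out.println(tails);   // prints the same value as above
    }
}
```

Both loops produce the same expected value, because each $P_i$ appears in exactly $i$ of the tail sums.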
We require at least two probes to set the bounds, so the cost is

$$
2 + \sum_{i=3}^{\sqrt{n}} P(\text{need at least $i$ probes}).
$$
We now take advantage of a useful fact known as Čebyšev's Inequality. Čebyšev's inequality states that P(need exactly $i$ probes), or $P_i$, is

$$
P_i \leq \frac{p(1-p)n}{(i-2)^2 n} \leq \frac{1}{4(i-2)^2}
$$

because $p(1-p) \leq 1/4$ for any probability $p$. This assumes uniformly distributed data. Thus, the expected number of probes is
$$
2 + \sum_{i=3}^{\sqrt{n}} \frac{1}{4(i-2)^2}
  < 2 + \frac{1}{4}\sum_{i=1}^{\infty} \frac{1}{i^2}
  = 2 + \frac{1}{4}\cdot\frac{\pi^2}{6}
  \approx 2.4112.
$$
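The convergence of this bound can be illustrated with a short sketch (class name is my own) that evaluates the partial sums for increasing $\sqrt{n}$ and compares them to the limit $2 + \frac{1}{4}\cdot\frac{\pi^2}{6}$:

```java
// Partial sums of the probe bound 2 + (1/4) * sum_{i=3}^{sqrt(n)} 1/(i-2)^2,
// compared against the limit 2 + (1/4)(pi^2/6) ~= 2.4112.
public class ProbeBound {
    public static void main(String[] args) {
        for (int sqrtN : new int[] {4, 16, 256, 65536}) {
            double bound = 2.0;
            for (int i = 3; i <= sqrtN; i++) {
                double d = i - 2;
                bound += 1.0 / (4.0 * d * d);
            }
            System.out.printf("sqrt(n)=%d  bound=%.4f%n", sqrtN, bound);
        }
        double limit = 2.0 + (Math.PI * Math.PI / 6.0) / 4.0;
        System.out.printf("limit=%.4f%n", limit);
    }
}
```

The partial sums increase with $\sqrt{n}$ but never reach the limit, so the expected number of probes stays below roughly 2.4112 regardless of $n$.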
Is QBS better than binary search? Theoretically yes, because $O(\log \log n)$ grows slower than $O(\log n)$. However, we have a situation here that illustrates the limits of the asymptotic complexity model in some practical situations. Yes, $c_1 \log n$ does grow faster than $c_2 \log \log n$. In fact, it is exponentially faster! But even so, for practical input sizes, the absolute cost difference is fairly small. Thus, the constant factors might play a role. First we compare $\lg \lg n$ to $\lg n$.
    n       lg n   lg lg n   Difference (factor)
    16        4       2          2
    256       8       3          2.7
    2^16     16       4          4
    2^32     32       5          6.4
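The table's entries can be reproduced with a short sketch (class name is my own):

```java
// Reproduces the lg n vs. lg lg n comparison for the table's values of n.
public class LogLogTable {
    // Base-2 logarithm, computed via the change-of-base formula.
    static double lg(double x) { return Math.log(x) / Math.log(2.0); }

    public static void main(String[] args) {
        double[] ns = {16, 256, Math.pow(2, 16), Math.pow(2, 32)};
        System.out.println("n        lg n   lg lg n   factor");
        for (double n : ns) {
            double lgN = lg(n);
            double lglgN = lg(lgN);
            System.out.printf("2^%-6.0f %5.1f %8.1f %8.1f%n",
                              lgN, lgN, lglgN, lgN / lglgN);
        }
    }
}
```

For example, at $n = 2^{32}$ the ratio is $32 / 5 = 6.4$, matching the last row of the table.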