The Naïve Monte Carlo Algorithm proceeds by computing the Naïve Estimator N times and returning their mean; thus, instead of the exact probability p = P(Φ), the algorithm returns an approximate value p̂. If we use the 0-1 random variable Z_k to represent the outcome of the k-th call of the Naïve Estimator, the result of the algorithm can be modeled by the random variable
\hat{p} = \frac{\sum_{k=1}^{N} Z_k}{N}
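To make the procedure concrete, here is a minimal Python sketch, assuming the formula Φ is represented as a predicate phi over truth assignments and that each Boolean variable is true independently with the marginal probability given in probs (phi and probs are illustrative names, not from the text):

```python
import random

def naive_estimator(phi, probs):
    """One call of the Naive Estimator: draw a random truth assignment
    (each variable x is true with probability probs[x], independently)
    and report whether phi holds on it -- the 0-1 outcome Z_k."""
    assignment = {x: random.random() < p for x, p in probs.items()}
    return 1 if phi(assignment) else 0

def naive_monte_carlo(phi, probs, n_steps):
    """Run the estimator N times and return their mean p_hat,
    an unbiased estimate of p = P(phi)."""
    return sum(naive_estimator(phi, probs) for _ in range(n_steps)) / n_steps

# Example: phi = (x1 AND x2) OR x3 with all marginals 0.5;
# the exact probability is 0.5 + 0.5 * 0.25 = 0.625.
phi = lambda a: (a["x1"] and a["x2"]) or a["x3"]
probs = {"x1": 0.5, "x2": 0.5, "x3": 0.5}
print(naive_monte_carlo(phi, probs, n_steps=10_000))  # close to 0.625
```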
The expected value of Z_k is E[Z_k] = P(Φ), which implies that Z_k is an unbiased estimator for P(Φ), and, hence, so is p̂: E[p̂] = p. The question is, how well does p̂ approximate p? This depends on the number of steps N, as follows. Fix ε > 0 and δ > 0. We say that the algorithm is an (ε, δ)-approximation if:

Pr(|\hat{p} - p| > \varepsilon \cdot p) \le \delta
The probability above, Pr, is taken over the random choices of the algorithm, and should not be confused with the probability p = P(Φ) that we are trying to compute. In other words, the algorithm is an (ε, δ)-approximation if the probability that it makes an error worse than ε is smaller than δ. By choosing
N = \frac{4}{\varepsilon^2 p} \log\frac{2}{\delta}    (5.8)
we obtain

Pr(|\hat{p} - p| > \varepsilon \cdot p) \le \delta
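For a concrete sense of scale (taking log in Equation (5.8) to be the natural logarithm, as in the usual Chernoff-bound statement): an approximation with ε = 0.1 and δ = 0.1 of a probability near p = 0.5 requires

N = \frac{4}{0.1^2 \cdot 0.5} \ln\frac{2}{0.1} = 800 \ln 20 \approx 2400

runs of the Naïve Estimator.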
In other words: if we want to compute an (ε, δ)-approximation of p = P(Φ), then we must run the Naïve Monte Carlo algorithm for N steps, where N is given by the formula above.

Notice that the relationship between N and ε depends on p, which is, of course, unknown. Worse, p may be as small as 1/2^n, where n is the number of variables, and therefore, in theory, N may need to be exponentially large.

A better way to look at the formula above is to see it as giving us an approximation interval [L, U] for p, which improves with N. Fix the desired confidence 1 − δ (a typical confidence value of 0.9 would require δ = 0.1). Then, after N steps of the algorithm, we can compute from the formula above the value

\varepsilon = \sqrt{\frac{4}{\hat{p} N} \log\frac{2}{\delta}}

Then, we are guaranteed (with confidence 1 − δ) that p ∈ [L, U] = [p̂ − ε/2, p̂ + ε/2].
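The following Python sketch computes this interval, under the same illustrative conventions as above (simply solving Equation (5.8) for ε with p̂ substituted for the unknown p):

```python
import math

def approximation_interval(p_hat, n_steps, delta):
    """Given the estimate p_hat after n_steps runs and failure
    probability delta, compute epsilon by solving Equation (5.8)
    with p_hat in place of the unknown p, and return the interval
    [L, U] that contains p with confidence 1 - delta."""
    eps = math.sqrt(4 / (p_hat * n_steps) * math.log(2 / delta))
    return p_hat - eps / 2, p_hat + eps / 2

# Confidence 0.9 (delta = 0.1), 10,000 steps, estimate p_hat = 0.625:
L, U = approximation_interval(0.625, 10_000, 0.1)
print(L, U)  # roughly 0.60 and 0.65
```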