Here, because n is not so large, a computing time of O(n) or even O(n^2) for
the discrepancy is acceptable (in contrast to RNGs). The choice of discrepancy
can be different for this reason.
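As an illustration of such an O(n^2) computation, here is a sketch (in Python, our choice of language) of the L2 star discrepancy evaluated via Warnock's formula; this particular discrepancy is one common choice, not necessarily the one the text has in mind.

```python
import numpy as np

def l2_star_discrepancy(P):
    """L2 star discrepancy of an (n, s) point set P in [0,1)^s,
    computed with Warnock's formula in O(n^2 s) time."""
    n, s = P.shape
    term1 = 3.0 ** (-s)  # integral of (prod_k x_k)^2 over the unit cube
    term2 = (2.0 / n) * np.prod((1.0 - P**2) / 2.0, axis=1).sum()
    # Pairwise products of (1 - max(u_ik, u_jk)) over coordinates: the O(n^2 s) part.
    pairwise = np.prod(1.0 - np.maximum(P[:, None, :], P[None, :, :]), axis=2)
    term3 = pairwise.sum() / n**2
    return np.sqrt(term1 - term2 + term3)
```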
In practice, the worst-case error bound (2) is usually much too difficult to
compute or even to approximate, and may be very loose for our specific function
f. For these reasons, one would rather turn the deterministic approximation
μ_n into a randomized unbiased estimator, and replace the error bound by a
probabilistic error estimate obtained by estimating the variance of the estimator.
This is achieved by randomizing the point set P_n so that [16,17,13,18]:
(a) it retains its high uniformity when taken as a set, and
(b) each individual point is a random vector with the uniform distribution over (0,1)^s.
A randomized QMC (RQMC) estimator of μ, denoted μ_{n,rqmc}, is defined as
the average (1) in which the u_i are replaced by the n randomized points. By
performing m independent replicates of this randomization, and computing the
sample mean and sample variance of the m independent realizations of μ_{n,rqmc},
one obtains an unbiased estimator of μ and an unbiased estimator of the variance
of this estimator. This permits one to obtain an asymptotically valid confidence
interval on μ [17,13].
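A minimal sketch of this replication scheme in Python (our illustration; we assume `P` is an (n, s) array of QMC points, `randomize` applies one independent randomization to it, and `f` evaluates the integrand on each point):

```python
import numpy as np

def rqmc_estimate(f, P, randomize, m=30, seed=0):
    """Average f over m independently randomized copies of the point
    set P, returning the sample mean and its estimated standard error."""
    rng = np.random.default_rng(seed)
    reps = np.empty(m)
    for j in range(m):
        Pj = randomize(P, rng)        # one independent randomization of P
        reps[j] = f(Pj).mean()        # one realization of mu_{n,rqmc}
    mean = reps.mean()                # unbiased estimator of mu
    se = reps.std(ddof=1) / np.sqrt(m)
    return mean, se                   # e.g., 95% CI: mean +/- 1.96 * se
```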
A simple randomization that satisfies these conditions is a random shift modulo 1
[19,17,20]: generate a single point U uniformly over (0,1)^s and add it to
each point of P_n, modulo 1, coordinate-wise. Another one is a random digital
shift in base b [21,13,22]: generate U uniformly over (0,1)^s, expand each of its
coordinates in base b, and add the digits, modulo b, to the corresponding digits
of each point of P_n. For b = 2, this amounts to applying a coordinate-wise
exclusive-or (xor) of U to all the points.
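Sketches of both randomizations, under the same conventions as above (truncating each coordinate to its first w = 53 binary digits in the digital shift is our simplification for double-precision points):

```python
import numpy as np

def random_shift_mod1(P, rng):
    """Random shift modulo 1: add one uniform U in (0,1)^s to every
    point of P, coordinate-wise, modulo 1."""
    U = rng.random(P.shape[1])
    return (P + U) % 1.0

def random_binary_digital_shift(P, rng, w=53):
    """Random digital shift in base 2: xor the first w binary digits
    of a uniform U into the digits of every point, coordinate-wise."""
    scale = 2.0 ** w
    U_digits = rng.integers(0, 2**w, size=P.shape[1], dtype=np.uint64)
    P_digits = (P * scale).astype(np.uint64)  # first w digits of each coordinate
    return (P_digits ^ U_digits) / scale
```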
The variance of the RQMC estimator is the same as its mean square error,
because it is unbiased, so by squaring on each side of (2) we obtain that

Var[μ_{n,rqmc}] ≤ V^2(f) · E[D^2(P_n)],

so the variance converges at least as fast as the mean square discrepancy.
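Making the squaring step explicit (taking (2) to be a worst-case bound of the form |μ_{n,rqmc} − μ| ≤ V(f) D(P_n), as the surrounding text suggests):

```latex
% Square (2), take expectations, and use unbiasedness of mu_{n,rqmc}:
\operatorname{Var}[\mu_{n,\mathrm{rqmc}}]
  = \mathbb{E}\bigl[(\mu_{n,\mathrm{rqmc}} - \mu)^2\bigr]
  \le V^2(f)\,\mathbb{E}\bigl[D^2(P_n)\bigr].
```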
A low-discrepancy sequence is an infinite sequence {u_0, u_1, u_2, ...} such that
the point set P_n formed by the first n points of the sequence has low discrepancy
for all large enough n. Often, the discrepancy is lower along a subsequence of
values of n; e.g., if n is a power of a given integer. Such sequences are convenient
if we want to increase n (adding more points to the set) until some error bound
or error estimate is deemed small enough, instead of fixing n a priori. A low-discrepancy
point set or sequence can also be infinite-dimensional; i.e., each point
has an infinite sequence of coordinates. In this case, the coordinates
are usually defined by a recurrence. This is convenient when f(U) depends on a
random and unbounded number of uniform random variables, which is frequent.
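As one standard concrete example (our choice, not one the text singles out), the van der Corput radical inverse in base b yields such an extensible sequence; using a distinct prime base per coordinate gives the s-dimensional Halton sequence:

```python
def radical_inverse(i, b):
    """Reflect the base-b digits of the integer i about the radix
    point, giving the i-th van der Corput point in [0, 1)."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        i, d = divmod(i, b)
        x += d * f
        f /= b
    return x

# First n points of an s-dimensional Halton sequence (s <= 5 here);
# more points can be appended later without discarding the first n.
PRIMES = [2, 3, 5, 7, 11]

def halton(n, s):
    return [[radical_inverse(i, PRIMES[k]) for k in range(s)] for i in range(n)]
```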
1.3 Importance of Low-Dimensional Projections
When the dimension s is large, filling up the s-dimensional unit cube evenly
requires too many points. For s = 100, for example, we already need 2^100 points