where the $X_i$'s are all uniformly distributed on $[a, b]$, and are all independent, then the expected value of $(b - a)Y_n$, for $n = 1, 2, \ldots$, is $\int_a^b f(x)\,dx$, while the variance of $Y_n$ is $\frac{1}{n}\operatorname{Var}[Y_1]$.
This sequence of random variables has the property that as $n$ goes to infinity, the variance goes to zero. That (together with the fact that $E[(b - a)Y_n] = \int_a^b f(x)\,dx$ for every $n$) makes it a useful tool in estimating the integral: We know that if we take enough samples, we'll get closer and closer to the correct value.
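As a concrete sketch (not from the text), here is a minimal Python example of this uniform-sampling estimator; the integrand f, the interval [0, 1], and the sample counts are all hypothetical choices.

```python
import random

def f(x):
    # Hypothetical integrand; its true integral over [0, 1] is 1/3.
    return x * x

def estimate_integral(f, a, b, n):
    # (b - a) * Y_n, where Y_n is the average of f at n uniform samples X_i.
    samples = [f(random.uniform(a, b)) for _ in range(n)]
    return (b - a) * sum(samples) / n

# Since Var[Y_n] = Var[Y_1] / n, repeated runs with larger n cluster more
# tightly around the true value of the integral.
for n in (10, 100, 10000):
    print(n, estimate_integral(f, 0.0, 1.0, n))
```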
To bring these notions back to graphics, when we recursively ray-trace, we
find a ray-surface intersection, and then, if the surface is glossy, recursively trace a
few more rays from that intersection point (using the BRDF to guide our random
choice of recursive rays). At the next pixel in the image, we may hit a nearby
point on the glossy surface, and trace a different few recursive rays, again chosen
randomly. We are using those few recursive ray samples to estimate the total light
arriving at the glossy surface (i.e., to estimate an integral). Even if the light arriving
at the two nearby points of the surface is nearly identical, our estimates of it may
not be identical. This leads to the appearance of noise in the image. The fact that
choosing more samples leads to reduced variance in the estimator means that if
we increase the number of recursive rays sufficiently, the noise they cause in the
image will be insignificant.
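To connect sample count to image noise, here is a small hypothetical sketch in the same spirit: each of many "pixels" forms an independent k-sample estimate of the same integral, and the spread across those estimates plays the role of the noise we would see in the image.

```python
import random
import statistics

def f(x):
    # Stand-in for the light arriving from a sampled direction (hypothetical).
    return x * x

def pixel_estimate(k, a=0.0, b=1.0):
    # Each "pixel" estimates the same integral from k random samples,
    # much as nearby glossy-surface points each trace a few recursive rays.
    return (b - a) * sum(f(random.uniform(a, b)) for _ in range(k)) / k

for k in (4, 64, 1024):
    estimates = [pixel_estimate(k) for _ in range(1000)]
    # The spread across "pixels" is the visible noise; it shrinks as k grows.
    print(k, statistics.stdev(estimates))
```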
In general, we've got some quantity C (like the integral of f, above) that we'd
like to evaluate, and some randomized algorithm that produces an estimate of C.
(Or, on the mathematical side, we have a random variable whose expectation is
[or is near] the desired value.) We call such a random variable an estimator for the
quantity. The estimator is unbiased if its expected value is actually C. Generally,
unbiased estimators are to be preferred over biased ones, in the absence of other
factors.
Estimators, being random variables, also have variance. Small variance is
generally preferred over large variance. Unfortunately, there tend to be tradeoffs:
Bias and variance are at odds with each other.
When we have a sequence of estimators like $Y_1, Y_2, \ldots$ above, we can ask not whether $Y_k$ is biased, but whether the bias in $Y_k$ decreases to zero as $k$ gets large, as
does the variance. If both of these happen, then the sequence of estimators is called
consistent. Clearly, consistency is a desirable property in an estimator: It suggests
that as you do more work, you can be confident that the results you're getting are
better and better, rather than getting closer and closer to a wrong answer!
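As an illustration of consistency (a contrived example, not one from the text), the sketch below divides the sample sum by k + 1 instead of k; the resulting estimator is biased for every k, yet both the bias and the variance shrink toward zero as k grows.

```python
import random
import statistics

def f(x):
    # Hypothetical integrand on [0, 1]; its true integral is 1/3.
    return x * x

def biased_estimate(k):
    # Deliberately divide by k + 1 instead of k: the expected value is
    # k/(k + 1) times the true integral, so the estimator is biased for
    # every k, but the bias (and the variance) shrink to zero as k grows.
    return sum(f(random.random()) for _ in range(k)) / (k + 1)

for k in (10, 100, 10000):
    runs = [biased_estimate(k) for _ in range(2000)]
    print(k, statistics.mean(runs) - 1.0 / 3.0, statistics.variance(runs))
```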
These are informal descriptions of estimators, bias, and consistency; making
these notions really precise requires mathematics beyond the scope of this topic.
See, for example, Feller [Fel68].
...
30.5 Importance Sampling and Integration
Let's return for one more look at the problem of computing the integral of a function f on the interval [a, b], again as a proxy for computing the integral in the reflectance equation. In our previous efforts, we used uniformly distributed random variables to sample from the interval [a, b]. Now let's see what happens when we use a random variable with some different distribution g. Since g will favor picking numbers in some parts of [a, b] over others, we can't use the samples to directly estimate the integral as before. Instead, we have to compensate for the effect of g: We do the same computation as before, but include a division by g.
The result is the importance-sampled single-sample estimate theorem.
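A minimal sketch of that idea, assuming a made-up linear density g on [a, b] and the same hypothetical integrand f as before: each sample X is drawn from g, and we average f(X)/g(X) rather than f(X) alone.

```python
import random

def f(x):
    # Hypothetical integrand; its true integral over [0, 1] is 1/3.
    return x * x

def g_pdf(x, a, b):
    # An assumed nonuniform density on [a, b]: linear, favoring larger x.
    return 2.0 * (x - a) / (b - a) ** 2

def sample_g(a, b):
    # Draw a sample from g by inverting its CDF, G(x) = ((x - a)/(b - a))^2.
    u = 1.0 - random.random()  # in (0, 1], avoids x == a where g is zero
    return a + (b - a) * u ** 0.5

def importance_estimate(n, a=0.0, b=1.0):
    total = 0.0
    for _ in range(n):
        x = sample_g(a, b)
        # Dividing by g(x) compensates for the nonuniform way x was chosen.
        total += f(x) / g_pdf(x, a, b)
    return total / n

print(importance_estimate(10000))  # should come out close to 1/3
```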
 
 