a mirror surface, the answer is “the mirror direction”; for a diffuse surface,
it's “light coming from any direction could produce light in the eye-ray
direction.” For others, the answer lies somewhere in between. We need,
given the direction $\mathbf{v}_o$, to be able to draw samples from $S_+$ in a way that's proportional to $\mathbf{v}_i \mapsto f_s(\mathbf{v}_i, \mathbf{v}_o)$ (perhaps multiplied by a cosine factor).
In methods like photon mapping, where we “push” light out from sources rather than “gathering it toward the eye,” we instead need to address the problem: “Given that some light arrived at this surface point $P$ from direction $\mathbf{v}_i$, it will be scattered in many directions. Randomly provide me an outgoing direction $\mathbf{v}_o$ where the probability of selecting $\mathbf{v}_o$ is proportional to $f_s(\mathbf{v}_i, \mathbf{v}_o)$.”
Clearly these two problems are closely related.
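For concreteness, here is a minimal sketch of this kind of sampling in its simplest case, a Lambertian (diffuse) surface, where $f_s$ is constant and only the cosine factor matters; for such a surface the gathering and pushing formulations are symmetric, so one routine serves both. The function names are hypothetical, and directions are assumed to be expressed in a local frame whose $z$-axis is the surface normal.

```python
import math
import random

def sample_cosine_weighted():
    """Draw a direction on the hemisphere S+ with density cos(theta)/pi
    (Malley's method: sample the unit disk uniformly, then project up)."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)                  # radius on the unit disk
    phi = 2.0 * math.pi * u2           # azimuthal angle
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # cos(theta) of the chosen direction
    return (x, y, z)

def pdf_cosine_weighted(v):
    """Density of the sample above; needed whenever the estimate divides by p."""
    return max(0.0, v[2]) / math.pi
```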
The collecting of samples is done with a sampling strategy. What we've outlined above—“Give me a sample that's proportional to something”—comes up both in the Metropolis algorithm and in importance sampling. Other sampling strategies can be used in other approaches to integration. Sometimes it's important to be certain that you've got samples over a whole domain, rather than accidentally having them all cluster in one area; in such cases, stratified sampling, Poisson disk sampling, and other strategies can generate good results without introducing harmful artifacts.
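As a small illustration of the difference, the following sketch (with hypothetical function names) generates $n \times n$ samples of the unit square either purely at random or stratified, with exactly one jittered sample per stratum, so the stratified samples cannot all cluster in one corner of the domain.

```python
import random

def random_samples(n):
    """n*n independent uniform samples of the unit square."""
    return [(random.random(), random.random()) for _ in range(n * n)]

def stratified_samples(n):
    """n*n jittered samples, one per cell of an n-by-n grid of strata."""
    samples = []
    for i in range(n):
        for j in range(n):
            u = (i + random.random()) / n  # jitter within stratum (i, j)
            v = (j + random.random()) / n
            samples.append((u, v))
    return samples
```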
Regardless of the sampling approach that we use, the values we compute are
always random variables, and their properties, as estimators of the corresponding
integrals, are how we can measure the performance of the various algorithms.
Before we proceed, here's a brief review of what we learned about Monte
Carlo integration (following Kelemen and Szirmay-Kalos [KSKAC02]).
To compute the integral of a function f over some domain H , we represent it
as an expected value of a random variable:
to f s (
v i ,
f ( x ) dx =
f ( x )
p ( x ) p ( x ) dx
(31.71)
H
H
= E f ( x )
p ( x )
,
(31.72)
where $p$ is a probability density on the domain $H$. To estimate the expected value, we draw $N$ mutually uncorrelated samples according to $p$ and average them, resulting in
\begin{align}
\int_H f(x)\,dx &= E\!\left[\frac{f(x)}{p(x)}\right] \tag{31.73}\\
&\approx \frac{1}{N}\sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}, \tag{31.74}
\end{align}
where the standard deviation of this approximation is $\sigma/\sqrt{N}$, $\sigma^2$ being the variance of the random variable $f(X)/p(X)$. By choosing $p$ to be large where $f$ is large and small where $f$ is small, we reduce this variance. This is called importance sampling, with $p$ being the importance function. (If the samples are correlated, the variance reduction is not nearly as rapid; we'll return to this later.)
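To make Equations 31.73 and 31.74 and the importance-sampling idea concrete, here is a small sketch; the integrand and densities are illustrative choices, not anything from the text. It estimates $\int_0^1 x^2\,dx = 1/3$ first with the uniform density $p(x) = 1$ and then with the importance function $p(x) = 2x$, which is large where $f$ is large; both estimators converge to $1/3$, but the second has noticeably lower variance for the same $N$.

```python
import math
import random

def f(x):
    return x * x                               # integrand; exact integral over [0,1] is 1/3

def estimate_uniform(n):
    """Equation 31.74 with p(x) = 1: average f at uniform samples."""
    return sum(f(random.random()) for _ in range(n)) / n

def estimate_importance(n):
    """Equation 31.74 with p(x) = 2x: draw X = sqrt(U), average f(X)/p(X)."""
    total = 0.0
    for _ in range(n):
        x = math.sqrt(1.0 - random.random())   # X has density 2x on (0, 1]; avoids x = 0
        total += f(x) / (2.0 * x)
    return total / n

if __name__ == "__main__":
    random.seed(0)
    print(estimate_uniform(10000))             # roughly 1/3, higher variance
    print(estimate_importance(10000))          # roughly 1/3, lower variance
```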
With this in mind, let's begin with a general approach to approximating the
series solution to the rendering equation.