adjacent squares crosses the pixel grid diagonally, and we use a single ray through
each pixel center to estimate the reflected light, we get aliasing, as we saw in
Chapter 18. Using distribution ray tracing (or, equivalently, using Monte Carlo
integration) tends to replace this aliasing with noise, which is less visually distracting. So one way to choose a sampling strategy is to ask, "What kinds of noise do we prefer, if we have to have noise at all?"
do we prefer, if we have to have noise at all?”
Yellot [Yel83] suggests that the frequency spectrum of the generated samples
can be used to predict the kinds of noise we'll see. If there's lots of energy at some
frequency f, and the signal we're sampling also has energy at or near f, we'll
tend to see lots of aliasing rather than noise. And if there's lots of low-frequency
energy in the spectrum, the aliases produced will tend to be low-frequency, which
are more noticeable than high-frequency ones. In graphics, a sampling pattern is
said to be a blue noise distribution if it lacks low-frequency energy and lacks any
energy spikes. (The term is generally used for something more specific, namely,
one in which the spectral power in each octave increases by a fixed amount so
that the power density is proportional to the frequency.) Yellot gives evidence
that the pattern of retinal cells in the eye follows a blue-noise distribution. And
the good antialiasing properties certainly suggest that such distributions are good
candidates for sampling, as Cook noted. Mitchell [Mit87] notes that the stratified
sampling Cook proposes has the blue-noise property, at least weakly, but that other
processes can generate much better blue noise. For instance, the Poisson disk pro-
cess (initialize a kept list to be empty; repeatedly pick a point uniformly at random;
reject it if it's too near any other points you've kept, otherwise keep it) generates
very nice blue noise. It's unfortunately somewhat slow. Mitchell presents a faster
algorithm, and Fattal [Fat11] has developed a very fast alternative that represents
the current state of the art.
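The dart-throwing Poisson disk process described above can be written down directly. Here is an illustrative Python sketch (the function name and parameters are my own); it uses the naive quadratic-time distance test, whereas practical implementations accelerate the test with a uniform grid:

```python
import math
import random

def poisson_disk(n_attempts, min_dist, seed=0):
    """Dart-throwing Poisson disk sampling in the unit square.

    Repeatedly pick a uniformly random candidate point; keep it only if
    it lies at least min_dist away from every point kept so far.
    """
    rng = random.Random(seed)
    kept = []
    for _ in range(n_attempts):
        candidate = (rng.random(), rng.random())
        # Naive O(n) rejection test against all kept points.
        if all(math.dist(candidate, p) >= min_dist for p in kept):
            kept.append(candidate)
    return kept

# With min_dist = 0.1, a few thousand darts fill the unit square to
# near saturation while keeping every pair of samples well separated.
samples = poisson_disk(2000, 0.1)
```

The slowness the text mentions shows up here directly: as the kept list approaches saturation, almost every dart is rejected, so the acceptance rate falls toward zero.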
In our rendering code, we've divided light into “diffusely scattered” and
“impulse scattered,” on the grounds that the spikes in the BSDF for a mirror or
an air-glass interface have values that are so much larger than those nearby that
they are qualitatively different. But this fails to address the important phenomenon
of very glossy reflection (like the reflection from a well-waxed floor). The glossier
your materials are, the more difficult efficient sampling becomes. When we want
to compute scattered rays from a surface element, we can always sample outgoing directions v_o with a uniform or cosine-weighted distribution, and then assign a weight to the sample that's proportional to the scattering value f_s(v_i, v_o), but such samples will be ineffective for estimating the integral when v_o → f_s(v_i, v_o) is highly spiked (assuming the incoming radiance is fairly uniform). At the very least, it's best if your BSDF model provides a sampling function that can generate samples in proportion to v_o → f_s(v_i, v_o), although to accurately estimate the reflectance integral, you must also pay attention to the distribution of arriving radiance, which itself is dependent on the emitted radiance and the visibility
function. The only algorithm we know that is designed to simultaneously consider
all three—the variation in the BSDF, the emitted radiance, and the visibility—is
Metropolis light transport, but it comes with its own challenges, such as start-up
bias and the difficulty of designing effective mutations and correctly computing
their probabilities.
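As a concrete illustration of the weight-the-sample approach above, here is a minimal Monte Carlo sketch (in Python; the function names are my own) that samples outgoing directions uniformly over the hemisphere and weights each sample by f_s(v_i, v_o) cos θ / pdf. For a Lambertian BRDF, f_s = albedo/π, under constant incoming radiance, the estimate converges to albedo · L_in:

```python
import math
import random

def sample_uniform_hemisphere(rng):
    """Uniform direction on the upper hemisphere; pdf = 1 / (2*pi)."""
    z = rng.random()                      # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * rng.random()
    return (r * math.cos(phi), r * math.sin(phi), z), 1.0 / (2.0 * math.pi)

def estimate_reflected_radiance(albedo, L_in, n_samples, seed=0):
    """Estimate outgoing radiance for a Lambertian BRDF f_s = albedo/pi
    under constant incoming radiance L_in, sampling v_o uniformly and
    weighting each sample by f_s * cos(theta) / pdf."""
    rng = random.Random(seed)
    f_s = albedo / math.pi
    total = 0.0
    for _ in range(n_samples):
        (_, _, cos_theta), pdf = sample_uniform_hemisphere(rng)
        total += f_s * cos_theta * L_in / pdf
    return total / n_samples
```

For this flat BRDF a cosine-weighted sampler (pdf = cos θ / π) would make every weight exactly albedo · L_in, driving the variance to zero; conversely, for a highly spiked f_s the uniform sampler above converges very slowly, which is exactly why a BSDF-proportional sampling function pays off.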
To return to the matter of path-tracer/ray-tracer-style rendering, the goal to
keep in mind is variance reduction: If you can accurately estimate some part of
an integral by a direct technique, you may be able to substantially reduce the
variance of the overall estimate. Of course, it's important to reduce variance while