need to integrate over time. The camera shutter is open for some brief period (or
its electronic sensor is reset to black and allowed to accumulate light energy for
some period), and during that time, the arriving light at a pixel sensor may vary.
We can simulate the sensor's response by integrating over time, that is, by picking
rays through random image-plane points as before, but also with associated ran-
dom time values in the interval during which the shutter is open. The time associated
with a ray is used to determine the geometry of the world into which it is shot: A ray
at one moment may hit some object, but a geometrically identical ray a moment
later may miss the object, because the object has moved.
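To make this concrete, here is a minimal sketch, assuming a hypothetical scene containing a single unit sphere sliding linearly during the shutter interval (all names and the scene itself are ours, not from the text). Each sample draws a random shutter time and intersects the same pixel ray against the geometry as it exists at that time:

```python
import random

def hit_sphere(origin, direction, center, radius):
    """Return True if the ray origin + s*direction (s >= 0) hits the sphere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return False
    root = disc ** 0.5
    return (-b - root) / (2*a) >= 0 or (-b + root) / (2*a) >= 0

def sphere_center(t):
    """Hypothetical scene: a unit sphere sliding along x during the shutter."""
    return (-2.0 + 4.0*t, 0.0, 5.0)

def estimate_coverage(n_samples):
    """Fraction of time-sampled rays through one pixel that hit the sphere."""
    hits = 0
    for _ in range(n_samples):
        t = random.random()            # random time in the shutter interval [0, 1)
        ray_origin = (0.0, 0.0, 0.0)
        ray_dir = (0.0, 0.0, 1.0)      # geometrically identical ray at every sample
        if hit_sphere(ray_origin, ray_dir, sphere_center(t), 1.0):
            hits += 1
    return hits / n_samples
```

Averaging the hit results over many time samples is exactly the time integration described above: the sphere covers this pixel for about half the shutter interval, so the pixel averages to roughly half coverage, which is motion blur.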
Naturally, it's very inefficient to regenerate an entire model for each ray.
Instead, it makes more sense to treat the model as four-dimensional, and work
with four-dimensional bounding volume hierarchies. The sample rays we shoot
are then somewhat axis-aligned (their t-coordinate is constant), allowing the
possibility of some optimization in the BVH structure.
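One simple way to exploit the constant-t structure, sketched here under the assumption of linear motion (the function names are ours): store an object's bounding box at shutter open and shutter close, interpolate to get its 3D box at the ray's time, and run an ordinary slab test against that box.

```python
def box_at(t, box0, box1):
    """AABB of a linearly moving object at time t, interpolated between its
    boxes at shutter open (box0) and shutter close (box1)."""
    lo = tuple(a + t * (b - a) for a, b in zip(box0[0], box1[0]))
    hi = tuple(a + t * (b - a) for a, b in zip(box0[1], box1[1]))
    return lo, hi

def ray_hits_box(origin, direction, box, eps=1e-9):
    """Standard slab test. Because the ray's t-coordinate is constant, only
    the 3D box at that single time value is needed."""
    lo, hi = box
    smin, smax = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < eps:
            if o < l or o > h:
                return False
        else:
            s0, s1 = (l - o) / d, (h - o) / d
            if s0 > s1:
                s0, s1 = s1, s0
            smin, smax = max(smin, s0), min(smax, s1)
            if smin > smax:
                return False
    return True
```

A real 4D BVH would store such time-bounded boxes at every node; this sketch shows only the leaf-level test.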
Taking multiple samples in space and time helps generate motion blur; other
phenomena can also be generated by considering larger sampling domains. For
instance, we can change from a pinhole camera to a lens camera by tracing rays
from each pixel to many points of a lens, and then combining these samples. With
a good lens-and-aperture model, we can simulate effects like focus, chromatic
aberration, and lens flare. All that's required is lots and lots of samples and a
strategy for combining them.
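As a sketch of the lens-camera idea, here is a standard thin-lens model (the camera frame, parameter names, and sampling scheme are our assumptions, not from the text): pick a uniform random point on the lens disk, find where the pinhole ray through the pixel would cross the plane of perfect focus, and trace from the lens point toward that focus point.

```python
import random

def thin_lens_ray(pixel_point, lens_radius, focal_distance):
    """Build one depth-of-field sample ray. pixel_point is a point on the
    image plane (its z-coordinate is the image-plane distance, nonzero);
    the lens disk sits at the origin in the z = 0 plane."""
    # Uniform point on the lens disk, via rejection sampling.
    while True:
        lx, ly = 2*random.random() - 1, 2*random.random() - 1
        if lx*lx + ly*ly <= 1:
            break
    lens_point = (lx * lens_radius, ly * lens_radius, 0.0)
    # Where the pinhole ray through pixel_point crosses the focal plane.
    px, py, pz = pixel_point
    scale = focal_distance / pz
    focus = (px * scale, py * scale, focal_distance)
    # Every sample ray for this pixel passes through the same focus point,
    # so objects on the focal plane stay sharp and others blur.
    direction = tuple(f - l for f, l in zip(focus, lens_point))
    return lens_point, direction
```

Averaging many such rays per pixel yields depth of field; simulating chromatic aberration or lens flare requires a more detailed lens model, but the sampling pattern is the same.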
When we sample rays passing through the points of a pixel square with a
uniform distribution, we can estimate the pixel-sensor response with Monte
Carlo integration. We showed in Chapter 31 that the variance of the estimate falls
off like 1/N, where N is the number of samples, assuming that the samples are
independent and identically distributed. One of the reasons for the inverse-linear
falloff is that when we draw many samples independently, they tend to fall
into clusters; that is, it's increasingly likely that some pair of samples lies quite
close together, or even a group of three or four or more. It's natural to think that
if we chose our samples so that no two were too close, we'd get “better coverage”
and therefore a better estimate of the integral. This conjecture is correct.
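The 1/N falloff is easy to observe numerically. In this sketch (our code; the test integrand is an arbitrary smooth choice, not from the text), we estimate an integral over the unit square many times at a given sample count and measure the spread of the estimates:

```python
import random

def mc_estimate(f, n, rng):
    """Monte Carlo estimate of the integral of f over the unit square,
    using n independent uniform samples."""
    return sum(f(rng(), rng()) for _ in range(n)) / n

def empirical_variance(f, n, trials, seed=0):
    """Variance of the n-sample estimator, measured across many
    independent runs."""
    rng = random.Random(seed).random
    estimates = [mc_estimate(f, n, rng) for _ in range(trials)]
    mean = sum(estimates) / trials
    return sum((e - mean) ** 2 for e in estimates) / trials
```

Quadrupling N should divide the measured variance by roughly four, consistent with the 1/N law.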
A simple implementation, the most basic form of stratified sampling, divides
the pixel square into a k × k grid of smaller squares, where k = √N, and then
chooses one sample uniformly at random from each smaller square. With this
strategy, the variance falls off like 1/N², which is an enormous improvement.
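A minimal sketch of that scheme (our code): jitter one sample inside each cell of a k × k grid, so that no two samples ever share a cell.

```python
import random

def stratified_samples(k, rng=random.random):
    """One uniformly random sample inside each cell of a k-by-k grid over
    the unit square, giving N = k*k samples with no two in the same cell."""
    return [((i + rng()) / k, (j + rng()) / k)
            for i in range(k) for j in range(k)]

def estimate(f, samples):
    """Average of f over a list of 2D sample points."""
    return sum(f(x, y) for x, y in samples) / len(samples)
```

Comparing the spread of stratified estimates against the same number of independent uniform samples shows the improvement directly: for a smooth integrand, the stratified estimator's variance is far smaller at equal sample count.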
Inline Exercise 32.9: Suppose that you have 25 samples to use at one pixel.
You can
(a) distribute them in a 5 × 5 grid,
(b) distribute them uniformly and independently, or
(c) use the stratified sampling strategy just described, dividing the pixel square
into small squares and choosing one sample per small square.
We've said that choice (c) is better than choice (b), but even with choice (c), we can get pairs
of samples (in adjacent small squares) that are very close to each other. Does
this mean that choice (a) is better?
Regardless of what approach you take to generating samples, it's worth think-
ing about the result you'll get when the function you're integrating has a sharp
edge, such as the light reflected by adjacent squares of a chessboard—one (white)
square reflects well, the adjacent (black) square does not. If the edge between