in a simple fashion 100% of the time: Because the mirror has (0, 1, 1) as its normal, an incoming ray in direction (x, y, z) becomes an outgoing ray in direction (x, -z, -y). It's easy to mentally work through any such interaction. And if we choose a pixel at the center of the image, then all x-coordinates will be very near 0 and can be neglected.
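As a quick sanity check of that rule, here is a minimal C++ sketch (with a small Vec3 type introduced only for illustration) that applies the standard reflection formula d' = d - 2(d . n)n about the normalized mirror normal and confirms that (x, y, z) maps to (x, -z, -y):

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Reflect direction d about the plane with unit normal n: d' = d - 2(d . n)n.
    Vec3 reflect(Vec3 d, Vec3 n) {
        double k = 2.0 * dot(d, n);
        return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
    }

    int main() {
        double s = 1.0 / std::sqrt(2.0);
        Vec3 n = { 0.0, s, s };       // the mirror normal (0, 1, 1), normalized
        Vec3 d = { 0.3, -0.5, 0.8 };  // an arbitrary incoming direction (x, y, z)
        Vec3 r = reflect(d, n);
        std::printf("(%g, %g, %g)\n", r.x, r.y, r.z);  // prints (0.3, -0.8, 0.5)
    }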
Doubtless you'll develop your own approaches to debugging, but because ren-
dering code is often closely tied to particular phenomena, an approach in which
it's easy to turn on or off certain parts of computed radiance, and to reason about
what remains, makes for much easier debugging.
32.9 Discussion and Further Reading
As we promised at the start of this chapter, we've described basic implementa-
tions of a path tracer and a photon-map/ray-tracing hybrid, showing some design
choices and pitfalls along the way. Each renderer produces an array containing
radiance values, the value at pixel ( x , y ) being an average of the radiance values
for eye rays passing through a square centered at ( x , y ) , whose side length is one
interpixel spacing. This models a perfect-square radiance sensor, which is a fair
approximation of the CCD cells of a typical digital camera. The approximation is
only “fair” because at low radiance values, noise in the CCD system may dom-
inate, and for larger radiance values, the response of the sensor is nonlinear: It
saturates at some point. And even between these limits, the sensor response isn't
really linear.
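In code, the per-pixel averaging just described might look like the following sketch; radianceForEyeRay is a hypothetical stand-in for whatever the renderer computes along a single eye ray, given a trivial body here only so the sketch compiles on its own:

    #include <random>

    // Hypothetical stand-in for the renderer's eye-ray radiance computation;
    // a real renderer would trace the ray through (x, y) into the scene.
    double radianceForEyeRay(double x, double y) { return 1.0; }

    // Estimate the value at pixel (px, py) by averaging the radiance of n eye
    // rays through points uniformly distributed over the unit square centered
    // at the pixel. This is convolution of the radiance field with a box filter.
    double pixelRadiance(int px, int py, int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> offset(-0.5, 0.5);
        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            sum += radianceForEyeRay(px + offset(rng), py + offset(rng));
        }
        return sum / n;
    }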
What we do with these radiance images depends on our goals. If we want to
build an environment map, then a radiance image is a fine thing to work with. If
we want to display the image on a conventional monitor using a standard image-
display program, we need to convert each radiance value to the sort of value
recorded by an ordinary camera in response to this amount of radiance. As we
discussed in Chapter 28, these values are typically not proportional to radiance. If
the radiance values cover a very wide range, an ordinary camera might truncate the
lowest and highest values. Because we have the raw values, we may be able to do
something more sophisticated, tricking the visual system into perceiving a wider
range of brightness than is actually displayed. This is the study of tone mapping [RPG99, RSSF02, FLW02, MM06], which remains an active area of research.
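As a deliberately simple illustration (not the method of any of the papers cited above), a global operator can compress each radiance value with L/(1 + L), which maps [0, infinity) into [0, 1), and then gamma-encode the result for a conventional display:

    #include <algorithm>
    #include <cmath>

    // A minimal global tone-mapping sketch: compress radiance from [0, inf)
    // into [0, 1) with L / (1 + L), then gamma-encode for display. Real
    // operators from the tone-mapping literature are far more sophisticated.
    double toneMap(double radiance, double gamma = 2.2) {
        double compressed = radiance / (1.0 + radiance);
        return std::pow(std::clamp(compressed, 0.0, 1.0), 1.0 / gamma);
    }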
Rather than simply storing the average radiance for each location, we could
instead accumulate the samples themselves for later processing, allowing us to
simulate the responses of several different kinds of sensors, for instance, or more
generally, using them as data for a density-estimation problem, the “density”
in this case being the pixel values. Our simple approach of averaging samples
amounts to convolution with a box filter, but other filtering approaches yield bet-
ter results for different applications [MN88]. Not surprisingly, if we know what
filter we'll be using, we can collect samples in a way that lets us best estimate the
convolved value (i.e., we can do importance sampling based on the convolution
filter). In general, sampling and reconstruction should be designed hand in hand
whenever possible.
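To make that concrete, suppose the reconstruction filter is a tent (triangle) filter one pixel in radius. We can importance-sample the filter itself: warping uniform samples through the tent's inverse CDF concentrates them near the pixel center, and the filter weight then cancels against the sampling density, so a plain average remains the correct estimate. A sketch, reusing the hypothetical radianceForEyeRay from the earlier fragment:

    #include <cmath>
    #include <random>

    double radianceForEyeRay(double x, double y) { return 1.0; }  // hypothetical

    // Warp a uniform u in [0, 1) to the tent distribution on [-1, 1], whose
    // density is proportional to 1 - |t| (the inverse-CDF method).
    double sampleTent(double u) {
        return (u < 0.5) ? std::sqrt(2.0 * u) - 1.0
                         : 1.0 - std::sqrt(2.0 * (1.0 - u));
    }

    // Tent-filtered estimate at pixel (px, py): because sample positions are
    // drawn from the filter itself, each sample contributes equal weight.
    double tentFilteredRadiance(int px, int py, int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            sum += radianceForEyeRay(px + sampleTent(u(rng)),
                                     py + sampleTent(u(rng)));
        }
        return sum / n;
    }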
The notion of taking multiple samples to estimate the sensor response at
a pixel was first extensively developed in Cook's paper on distribution ray
tracing [CPC84]. We've applied it here in its minimal form—uniform sampling over a square representing the pixel—but for animation, for instance, we can also distribute the samples in time to capture motion blur.
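A sketch of that extension (with a hypothetical radianceForEyeRayAtTime that poses the scene at a given instant) jitters a time over the shutter interval for each eye ray, so that moving geometry averages into motion blur:

    #include <random>

    // Hypothetical: radiance along the eye ray through (x, y) with the scene
    // posed at time t, where 0 = shutter open and 1 = shutter close.
    double radianceForEyeRayAtTime(double x, double y, double t) { return 1.0; }

    // Distribution ray tracing over the pixel square and the shutter interval:
    // each sample receives its own jittered position and time.
    double pixelRadianceWithMotionBlur(int px, int py, int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> offset(-0.5, 0.5);
        std::uniform_real_distribution<double> time(0.0, 1.0);
        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            sum += radianceForEyeRayAtTime(px + offset(rng), py + offset(rng),
                                           time(rng));
        }
        return sum / n;
    }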