Eighty percent of the time a ray hitting the sphere, for instance, is reflected in a specific direction, and just as in ray tracing, nearby rays are reflected to nearby rays. But there's also a 20% chance of absorption. If we trace, say, ten primary rays per pixel, it's reasonable to expect six, seven, eight, nine, or ten of these rays to be reflected (i.e., from zero to four of them to be absorbed). That'll lead to adjacent pixels having quite different radiance sums. To reduce this variance between adjacent pixels, we need to send quite a lot of primary rays (perhaps hundreds or thousands per pixel).
You can even use the notion of confidence intervals from statistics to determine a number of samples so large that the fraction of absorbed rays is very nearly the absorption probability, so that interpixel variation is small enough to fall beneath the threshold of observation. In fact, Figure 32.10 was rendered with 100 primary
rays per pixel, and despite this, the reflection of the floor in the red sphere appears
slightly speckled. Figure 32.11 shows the speckle more dramatically.
Figure 32.11: Path tracing with ten rays per pixel.
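To make the confidence-interval idea concrete, here is a quick back-of-the-envelope calculation (a sketch; these numbers are not from the text). With n primary rays and absorption probability p = 0.2, the number of absorbed rays is binomially distributed, so the absorbed fraction has standard deviation \sqrt{p(1-p)/n}. Requiring a 95% confidence half-width of at most \varepsilon gives

  n \ge \frac{(1.96)^2 \, p(1-p)}{\varepsilon^2},

so for \varepsilon = 0.01 we need n \approx 6100 rays per pixel, which is why hundreds or thousands of primary rays are called for.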
32.6 Photon Mapping
Let's now move on to a basic implementation of photon mapping. Recall that the main idea in photon mapping is to estimate the indirect light scattered from diffuse surfaces by shooting photons³ from the luminaires into the scene, recording where they arrive (and from what direction), and then reflecting them onward to be recorded again at subsequent bounces, until they are terminated by absorption or by a limit on the recursion depth. When it comes time to estimate scattering at a point P of a diffuse surface, we search for nearby photons and use them as estimates of the light arriving at P, which we then push through the reflectance process to estimate the light leaving P.
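For reference, the standard photon-map radiance estimate (not spelled out in this passage, but standard since Jensen's formulation) gathers the k photons nearest P within a disk of radius r:

  L_r(P, \omega_o) \approx \sum_{i=1}^{k} f_s(P, \omega_i, \omega_o) \, \frac{\Delta\Phi_i}{\pi r^2},

where \Delta\Phi_i is the power of the ith photon, \omega_i is its incident direction, and \pi r^2 is the area over which the photons were gathered.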
Not surprisingly, much of the technology used in the path-tracing code can
be reused for photon mapping. In our implementation, we have built a photon-
map data structure based on a hash grid (see Chapter 37); as we mentioned in
our discussion of photon mapping, any spatial data structure that supports rapid
insertion and rapid neighborhood queries can be used instead.
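As a rough illustration of the kind of structure required (the names and details here are my own, not the book's Chapter 37 code), a minimal hash grid bins items into cubical cells and answers a neighborhood query by visiting only the cells that overlap the query sphere:

  #include <cmath>
  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  struct Vec3 { float x, y, z; };

  // Minimal hash grid (illustrative sketch). Items are binned into cells of
  // side cellSize; hash collisions between cells are harmless because
  // gather() re-checks the true distance of every candidate.
  template <typename T>
  class HashGrid {
  public:
      explicit HashGrid(float cellSize) : m_cellSize(cellSize) {}

      void insert(const Vec3& p, const T& value) {
          m_cells[key(p)].push_back({p, value});
      }

      // Collect all items within `radius` of `center`.
      std::vector<T> gather(const Vec3& center, float radius) const {
          std::vector<T> result;
          const float c[3] = { center.x, center.y, center.z };
          int lo[3], hi[3];
          for (int a = 0; a < 3; ++a) {
              lo[a] = cellIndex(c[a] - radius);
              hi[a] = cellIndex(c[a] + radius);
          }
          for (int i = lo[0]; i <= hi[0]; ++i)
            for (int j = lo[1]; j <= hi[1]; ++j)
              for (int k = lo[2]; k <= hi[2]; ++k) {
                  auto it = m_cells.find(pack(i, j, k));
                  if (it == m_cells.end()) continue;
                  for (const auto& e : it->second) {
                      float dx = e.pos.x - center.x, dy = e.pos.y - center.y,
                            dz = e.pos.z - center.z;
                      if (dx*dx + dy*dy + dz*dz <= radius*radius)
                          result.push_back(e.value);
                  }
              }
          return result;
      }

  private:
      struct Entry { Vec3 pos; T value; };

      int cellIndex(float x) const { return (int)std::floor(x / m_cellSize); }
      std::uint64_t pack(int i, int j, int k) const {
          // Combine the three cell indices into one 64-bit bucket key.
          return ((std::uint64_t)(std::uint32_t)i * 73856093u) ^
                 ((std::uint64_t)(std::uint32_t)j * 19349663u) ^
                 ((std::uint64_t)(std::uint32_t)k * 83492791u);
      }
      std::uint64_t key(const Vec3& p) const {
          return pack(cellIndex(p.x), cellIndex(p.y), cellIndex(p.z));
      }

      float m_cellSize;
      std::unordered_map<std::uint64_t, std::vector<Entry>> m_cells;
  };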
We've defined two rather similar classes, EPhoton and IPhoton, to represent photons as they are emitted and when they arrive; the "I" in IPhoton stands for "incoming." An EPhoton has a position from which it was emitted and a direction of propagation, which are stored together in a propagation ray, and a power, representing the photon's power in each of three spectral bands. An IPhoton, by contrast, has a position at which it arrived, a direction to the photon source from that position, and a power. Making distinct classes helps us keep separate the two distinct ways in which the term "photon" is used. In our implementation, an EPhoton is emitted, and its travels through the scene result in one or more IPhotons being stored in the photon map.
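In outline (the field and type names here are guesses based on the description above, not the book's listing), the two classes might look like this:

  #include <array>

  struct Vec3  { float x, y, z; };
  struct Ray   { Vec3 origin; Vec3 direction; };
  using Power3 = std::array<float, 3>;   // power in three spectral bands

  // A photon as emitted from a luminaire: where it starts, where it is
  // headed, and how much power it carries in each band.
  struct EPhoton {
      Ray    propagationRay;   // emission position + direction of travel
      Power3 power;
  };

  // A photon as stored in the photon map: where it arrived, the direction
  // back toward the source from that point, and the (possibly attenuated) power.
  struct IPhoton {
      Vec3   position;
      Vec3   directionToSource;
      Power3 power;
  };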
The basic structure of the code is to first build the photon map and then render the scene using it. Listing 32.11 shows the building of the photon map: We construct an array ephotons of photons to be emitted, emit each into the scene to generate an array iphotons of incoming photons, and store these in the map m_photonMap.
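A sketch of that structure (the names ephotons, iphotons, and m_photonMap follow the text's description, but everything else here is assumed rather than taken from Listing 32.11):

  #include <vector>

  // Assumes the EPhoton/IPhoton structs and HashGrid sketched above.
  class PhotonMapper {
  public:
      void buildPhotonMap(int photonCount) {
          // 1. Sample emission positions and directions on the luminaires.
          std::vector<EPhoton> ephotons = emitPhotons(photonCount);

          // 2. Trace each emitted photon through the scene; every diffuse
          //    hit appends one IPhoton, and the photon is then absorbed or
          //    scattered onward, bounded by a recursion-depth limit.
          std::vector<IPhoton> iphotons;
          for (const EPhoton& e : ephotons) {
              tracePhoton(e, iphotons);
          }

          // 3. Store all arrivals in the spatial data structure.
          for (const IPhoton& i : iphotons) {
              m_photonMap.insert(i.position, i);
          }
      }

  private:
      // Declarations only; the bodies are beyond this sketch.
      std::vector<EPhoton> emitPhotons(int count);
      void tracePhoton(const EPhoton& e, std::vector<IPhoton>& out);

      HashGrid<IPhoton> m_photonMap{0.1f};   // cell size chosen arbitrarily here
  };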
3. Recall that a "photon" in photon mapping represents a bit of power emitted by the light; it typically corresponds to many physical photons.