field radiance at P either by looking at its nearest neighbors, as we've described, or
by shooting lots of rays from P to hit other points Q_i (i = 1, 2, ...), and then using
nearest-neighbor techniques to estimate the field radiance at each Q_i, and the light
reflected back toward P from each Q_i. The collection of these gathered lights is
also a valid estimate of the field radiance at P, but is much less likely to exhibit the
discontinuities described above, as any discontinuity is typically averaged with a
great many other continuous functions.
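The gathering-and-averaging step can be sketched as follows. This is a toy illustration, not the book's code: the flat photon list, the k-nearest-photon density estimate, and the random stand-in for actual ray casting are all simplifying assumptions.

```python
import math
import random

# Hypothetical photon map: (position, power) pairs scattered on a plane.
random.seed(42)
photons = [((random.uniform(0.0, 1.0), random.uniform(0.0, 1.0), 0.0), 1.0)
           for _ in range(200)]

def radiance_estimate(x, k=10):
    """Field-radiance estimate at x from the k nearest photons:
    total power over the area of the disc that encloses them."""
    by_dist = sorted((math.dist(pos, x), power) for pos, power in photons)
    nearest = by_dist[:k]
    r = nearest[-1][0]                      # radius enclosing the k photons
    return sum(power for _, power in nearest) / (math.pi * r * r)

def final_gather(p, n_rays=32):
    """Average the radiance gathered from many points Q_i seen from p;
    a discontinuity in any one estimate is averaged away by the rest."""
    total = 0.0
    for _ in range(n_rays):
        # Stand-in for tracing a gather ray from p to a surface point Q_i:
        q = (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0), 0.0)
        total += radiance_estimate(q)
    return total / n_rays
```

The point of the sketch is only the structure: each Q_i gets its own (possibly discontinuous) nearest-neighbor estimate, and the returned value is the mean of many of them.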
...
31.19.1 Image-Space Photon Mapping
McGuire and Luebke [ML09] have rethought photon mapping for a special case—
point lights and pinhole cameras—by recognizing that in this case, some of
the most expensive operations could be substantially optimized. One of these
operations—the transfer of information from photons in the photon map to pixels
in the image—is highly memory-incoherent in the original photon-mapping
algorithm: One must seek through the k-d tree to find nearby photons, and
depending on the memory layout of that tree, this may involve parts of memory distant
from other parts. On the other hand, if every photon, once computed, could make
its contribution to all the relevant pixels (which are naturally close together in
memory), there would be a large improvement. The resultant algorithm is called
image-space photon mapping. This approach harkens back to Appel's notion of
drawing tiny “+” signs on a plotter: These marks were spatially localized, and
hence easy to draw with a plotter. It's also closely related to progressive photon
mapping [HOJ08], another approach that works primarily in image space.
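To make the incoherence concrete, here is a minimal k-d tree over photon positions with a nearest-neighbor query; this is a generic textbook structure, not the renderer's actual implementation. Each query chases pointers down the tree, and in a large tree those nodes can sit in widely separated parts of memory.

```python
import math

def build(points, depth=0):
    """Build a k-d tree node: (point, left, right, split axis)."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1),
            axis)

def nearest(node, target, best=None):
    """Recursive nearest-neighbor query; note the pointer-chasing
    descent that touches scattered nodes of the tree."""
    if node is None:
        return best
    point, left, right, axis = node
    if best is None or math.dist(point, target) < math.dist(best, target):
        best = point
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, best)
    if abs(diff) < math.dist(best, target):  # other side may hold a closer point
        best = nearest(far, target, best)
    return best
```

In the original algorithm a query like this (extended to the k nearest photons) runs for every pixel, which is exactly the memory traffic that image-space photon mapping avoids.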
The key insight is that when we ray-cast into the scene to gather light from
photons, adjacent pixels are likely to gather light from the same photons; we could
instead project the photons onto the film plane and add light to all the pixels within
a small neighborhood. There are quite a few subtleties (How large a
neighborhood? What about occlusion?), but the algorithm, implemented as a CPU/GPU
hybrid, is much faster than ordinary photon mapping. While the algorithm only
works with point lights and pinhole cameras, the added speed may be sufficient to
justify this limitation in some applications such as video games.
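The scatter direction can be sketched as follows, under many simplifying assumptions that are not in the original: a pinhole camera at the origin looking down +z, a hypothetical focal length, and a square splat neighborhood with no occlusion test or kernel weighting.

```python
WIDTH, HEIGHT = 64, 64
FOCAL = 32.0                    # hypothetical pinhole focal length, in pixels

def project(p):
    """Pinhole projection of world point p = (x, y, z); camera at the
    origin looking down +z. Returns pixel (u, v) or None if off-screen."""
    x, y, z = p
    if z <= 0.0:
        return None
    u = int(WIDTH / 2 + FOCAL * x / z)
    v = int(HEIGHT / 2 + FOCAL * y / z)
    if 0 <= u < WIDTH and 0 <= v < HEIGHT:
        return u, v
    return None

def splat(image, photons, radius=2):
    """Scatter each photon's power into a small square neighborhood of
    pixels around its projection -- the memory-coherent alternative to a
    per-pixel tree gather. Occlusion and kernel weighting are omitted."""
    for pos, power in photons:
        hit = project(pos)
        if hit is None:
            continue
        u0, v0 = hit
        for v in range(max(0, v0 - radius), min(HEIGHT, v0 + radius + 1)):
            for u in range(max(0, u0 - radius), min(WIDTH, u0 + radius + 1)):
                image[v][u] += power

image = [[0.0] * WIDTH for _ in range(HEIGHT)]
splat(image, [((0.0, 0.0, 4.0), 1.0)])   # one photon, straight ahead
```

The pixels a photon touches are contiguous rows of the image array, which is what makes this access pattern cache-friendly compared with the k-d tree traversal.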
31.20 Discussion and Further Reading
Many of the ideas in this chapter have been implemented in the open-source
Mitsuba renderer [Jak12]. Seeing such an implementation may help you make
these ideas concrete (indeed, we strongly recommend that you look at that
renderer), but we also recommend that you first follow the development of the next
chapter, in which some of the practical little secrets of rendering, which clutter up
many renderers, are revealed. This will make looking at Mitsuba far easier.
While much of this chapter has been about simulation of light transport, there
are a few large-scale observations about light in scenes that have crept into the
discussion in disguise. We now revisit these in greater detail.
For instance, in classifying light paths using the Heckbert notation, we
effectively partition the space of paths into subspaces, each of which we consider
differently. We know, for instance, that much of the light in a scene is direct light,
carried along LDE paths, and that in a scene with point lights and hard-edged