time before illuminating the pixel being rendered. This is an extremely
difficult problem. A first important step to making it tractable is to break
up the surfaces in the scene into discrete patches or sample points. But
even with a relatively modest number of patches, we still have to determine
which patches can “see” each other and have a conduit of radiance, and
which cannot see each other and do not exchange radiance. Then we must
solve for the balance of light in the rendering equation. Furthermore, when
any object moves, it can potentially alter which patches can see which. In
other words, practically any change will alter the distribution of light in
the entire scene.
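To make the patch formulation concrete, here is a minimal sketch, not taken
from the text, of a single gathering-style pass over discrete patches. The
Patch fields and the formFactor and visible helpers are hypothetical
stand-ins for the patch-to-patch coupling and visibility queries described
above, and everything is monochrome to keep the example short.

#include <cstddef>
#include <vector>

// Hypothetical patch record: self-emitted light, reflectance, and the
// outgoing-light value we are solving for (monochrome for brevity).
struct Patch {
    float emission    = 0.0f;  // light the patch emits on its own
    float reflectance = 0.5f;  // fraction of incident light re-emitted
    float radiosity   = 0.0f;  // current estimate of outgoing light
};

// Placeholder coupling and visibility queries.  In a real system these
// come from the scene geometry; here they are stubs so the sketch compiles.
float formFactor(const Patch&, const Patch&) { return 0.01f; }
bool  visible(const Patch&, const Patch&)    { return true; }

// One "gathering" pass: each patch collects light from every patch that
// can see it, scales the total by its reflectance, and adds its own
// emission.  Repeating this until the values stop changing approximates
// the balance of light in the rendering equation for this discretization.
void gatherOnce(std::vector<Patch>& patches) {
    std::vector<float> next(patches.size());
    for (std::size_t i = 0; i < patches.size(); ++i) {
        float incident = 0.0f;
        for (std::size_t j = 0; j < patches.size(); ++j) {
            if (i == j || !visible(patches[i], patches[j]))
                continue;
            incident += formFactor(patches[i], patches[j]) * patches[j].radiosity;
        }
        next[i] = patches[i].emission + patches[i].reflectance * incident;
    }
    for (std::size_t i = 0; i < patches.size(); ++i)
        patches[i].radiosity = next[i];
}

Note how a single moving object would invalidate the visibility and
coupling terms, which is exactly why this kind of solve is usually
reserved for static lights and geometry, as discussed next.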
However, it is usually the case that certain lights and geometry in the
scene are not moving. In this case, we can perform more detailed light-
ing calculations (solve the rendering equation more fully), and then use
those results, ignoring any error that arises from the difference between the
current lighting configuration and the one that was used during the offline
calculations. Let's consider several examples of this basic principle.
One technique is lightmapping. In this case, an extra UV channel is
used to arrange the polygons of the scene into a special texture map that
contains precalculated lighting information. This process of finding a good
way to arrange the polygons within the texture map is often called atlas-
ing. In this case, the discrete “patches” that we mentioned earlier are the
lightmap texels. Lightmapping works well on large flat surfaces, such as
floors and ceilings, which are relatively easy to arrange within the lightmap
effectively. But more dense meshes, such as staircases, statues, machinery,
and trees, which have much more complicated topology, are not so easily
atlased. Luckily, we can just as easily store precomputed lighting values in
the vertices, which often works better for relatively dense meshes.
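As an illustration, here is a minimal sketch, not taken from the text, of how
a renderer might consume this precomputed data at runtime: a second UV
channel selects a lightmap texel for atlased surfaces, while a dense mesh can
instead carry a baked color per vertex. All names here are hypothetical.

#include <cstddef>
#include <vector>

// Hypothetical vertex layout: one UV set for ordinary textures and an
// extra UV set, produced by atlasing, that addresses the lightmap.
struct Vertex {
    float uv[2];          // regular texture coordinates
    float lightmapUV[2];  // extra UV channel into the lightmap atlas
    float bakedLight[3];  // alternative: per-vertex precomputed lighting (RGB)
};

// Lightmap texels are the discrete "patches" holding precomputed lighting.
struct Lightmap {
    int width = 0, height = 0;
    std::vector<float> texels;  // RGB triples, filled offline

    // Nearest-texel lookup using the second UV channel (uv in [0,1]).
    void sample(const float uv[2], float outRGB[3]) const {
        int x = static_cast<int>(uv[0] * (width  - 1) + 0.5f);
        int y = static_cast<int>(uv[1] * (height - 1) + 0.5f);
        std::size_t base = 3 * (static_cast<std::size_t>(y) * width + x);
        for (int c = 0; c < 3; ++c)
            outRGB[c] = texels[base + c];
    }
};

// Modulate the surface color by the precomputed incident light.  A dense
// mesh might skip the lookup and use v.bakedLight directly instead.
void shadeBaked(const Vertex& v, const Lightmap& lm,
                const float albedo[3], float outRGB[3]) {
    float incident[3];
    lm.sample(v.lightmapUV, incident);
    for (int c = 0; c < 3; ++c)
        outRGB[c] = albedo[c] * incident[c];
}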
What exactly is the precomputed information that is stored in lightmaps
(or vertices)? Essentially, we store incident illumination, but there are many
options. One choice is the number of values stored per patch. If we have only
a single lightmap or vertex color, then we cannot account for the direc-
tional distribution of this incident illumination and must simply use the
sum over the entire hemisphere. (As we have shown in Section 10.1.3, this
“directionless” quantity, the incident radiant power per unit area, is prop-
erly known as radiosity, and for historical reasons algorithms for calculating
lightmaps are sometimes confusingly known as radiosity techniques, even
if the lightmaps include a directional component.) If we can afford more
than one lightmap or vertex color, then we can more accurately capture
the distribution. This directional information is then projected onto a
particular basis. We might have each basis function correspond to a single
direction. A technique known as spherical harmonics [44, 64] uses sinusoidal
basis functions similar to 2D Fourier techniques (a small sketch of such a
projection appears below). The point in any case is that the directional
distribution of incident light does matter, but when saving
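To make the basis-projection idea concrete, here is a minimal sketch, not
taken from the text, that projects a set of directional radiance samples onto
the first two bands of the real spherical harmonic basis (four coefficients)
and then evaluates the result for a query direction. The weighting assumes
the sample directions are spread uniformly over the sphere, and all names
are hypothetical.

#include <array>
#include <vector>

// A direction on the unit sphere and the incident light arriving from it.
struct DirSample {
    float x, y, z;   // unit direction
    float radiance;  // incident light from that direction (monochrome)
};

// The first two bands of the real spherical harmonic basis (4 functions),
// evaluated for a unit direction (x, y, z).
std::array<float, 4> shBasis(float x, float y, float z) {
    return {
        0.282095f,      // band 0 (constant term)
        0.488603f * y,  // band 1, m = -1
        0.488603f * z,  // band 1, m =  0
        0.488603f * x   // band 1, m = +1
    };
}

// Project directional samples onto the basis.  The 4*pi/N weight assumes
// the sample directions are distributed uniformly over the whole sphere.
std::array<float, 4> projectSH(const std::vector<DirSample>& samples) {
    std::array<float, 4> coeffs = {0.0f, 0.0f, 0.0f, 0.0f};
    if (samples.empty()) return coeffs;
    const float weight = 4.0f * 3.14159265f / static_cast<float>(samples.size());
    for (const DirSample& s : samples) {
        std::array<float, 4> b = shBasis(s.x, s.y, s.z);
        for (int i = 0; i < 4; ++i)
            coeffs[i] += s.radiance * b[i] * weight;
    }
    return coeffs;
}

// Approximately reconstruct the incident light arriving from a direction.
float evaluateSH(const std::array<float, 4>& coeffs, float x, float y, float z) {
    std::array<float, 4> b = shBasis(x, y, z);
    float result = 0.0f;
    for (int i = 0; i < 4; ++i)
        result += coeffs[i] * b[i];
    return result;
}

With only four coefficients this captures just a broad directional trend of
the incident light; adding higher bands captures sharper variation at the
cost of more storage per patch or vertex.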