reflectance equation). But further factorization in the form of texture mapping,
with parameters that can range from color to, say, the roughness parameter in some
scattering model, allows further simplification. Appearance modeling is the craft
of making compact representations for lots of materials. Given the messiness of
scattering described in this chapter's introduction, it should be no surprise that no
general theory of appearance modeling has yet emerged, despite some substantial
successes [GTR+06, DRS08].
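As a concrete (and purely hypothetical) illustration of this kind of factorization, the sketch below stores a diffuse color and a roughness value per texel of a small 2D map and feeds them into a simple diffuse-plus-specular model. The types, the nearest-neighbor lookup, and the roughness-to-exponent mapping are our own illustrative assumptions, not anything prescribed by the models cited here.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Per-texel appearance parameters: a color plus one scattering-model knob.
struct MaterialSample {
    Vec3  diffuseColor;   // albedo, from a color map
    float roughness;      // roughness, from a scalar map
};

// Toy nearest-neighbor parameter texture.
struct ParameterTexture {
    int width, height;
    std::vector<MaterialSample> texels;
    MaterialSample sample(float u, float v) const {
        int i = std::min(std::max(int(u * width),  0), width  - 1);
        int j = std::min(std::max(int(v * height), 0), height - 1);
        return texels[j * width + i];
    }
};

// Evaluate a simple diffuse-plus-specular model whose parameters vary over
// the surface; the roughness-to-exponent mapping is an illustrative choice.
Vec3 shade(const ParameterTexture& tex, float u, float v,
           float cosLight, float cosHalf) {
    MaterialSample m = tex.sample(u, v);
    float exponent = 2.0f / (m.roughness * m.roughness + 1e-4f) - 2.0f;
    float spec = std::pow(std::max(cosHalf, 0.0f), exponent);
    float diff = std::max(cosLight, 0.0f);
    return { m.diffuseColor.x * diff + spec,
             m.diffuseColor.y * diff + spec,
             m.diffuseColor.z * diff + spec };
}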
We briefly discussed volumetric scattering, with an implicit assumption that
the distribution of particles in the scattering medium was uniformly random. In
cases where the distribution has some structure (e.g., the rings of Saturn), more
sophisticated methods are called for. These were pioneered by Blinn [Bli82b],
and advanced by Kajiya and von Herzen [KVH84], Miller [Mil88], and Kajiya
and Kay [KK89], who introduced the notion of texels—three-dimensional arrays
of parameters approximating the visual properties of microsurfaces like hair or
fur—as a way to represent scattering of light from structured volumes of
scatterers (see Figure 27.20). The complexities of that work demonstrate once again
the point we made at the start of this chapter: Scattering tends to be messy and
complex.
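To make the idea of a texel concrete, here is a minimal sketch under our own assumptions about the data layout: a three-dimensional array whose cells hold a density and an average fiber direction, together with a toy ray march that accumulates opacity through the volume. It is not Kajiya and Kay's actual representation, only an illustration of "a 3D array of parameters."

#include <cmath>
#include <vector>

// One cell of the 3D array: a density plus an average fiber direction that a
// hair-like scattering model could use when shading the cell.
struct Cell {
    float density;
    float dir[3];
};

struct Texel {
    int nx, ny, nz;
    std::vector<Cell> cells;               // nx * ny * nz cells
    const Cell& at(int i, int j, int k) const {
        return cells[(k * ny + j) * nx + i];
    }
};

// March straight through the middle of the volume, accumulating opacity
// front to back; a full renderer would also shade each cell and would map
// the ray's world-space position into texel coordinates.
float accumulatedOpacity(const Texel& t, int steps, float stepLength) {
    float transmittance = 1.0f;
    for (int s = 0; s < steps; ++s) {
        int k = s * t.nz / steps;          // crude depth index along the ray
        const Cell& c = t.at(t.nx / 2, t.ny / 2, k);
        transmittance *= std::exp(-c.density * stepLength);
    }
    return 1.0f - transmittance;
}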
Is scattering too complex? If, in the course of rendering, we need to compute
the light leaving some object but not going directly to the eye, it's possible in
many cases to use a simplified proxy for scattering: We can't really tell whether
light has been scattered from a furry teddy bear, or from a brown paper sack of
about the same shape. There are exceptions, of course. Light scattered from a
crystal chandelier produces highlights all around a room; replacing the crystals
with diffusely scattering reflectors would not be the same at all. Even so, much of
the effect of scattering—the complex appearance of the teddy bear, for instance—
disappears after one bounce. It would be nice to avoid all the extra work in these
cases.
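One way to exploit this observation, sketched below with hypothetical interfaces of our own devising, is to evaluate the full scattering model only for surfaces seen directly by the camera, substituting a Lambertian proxy with roughly the same average albedo for everything reached only after a bounce.

// Hypothetical interfaces: a detailed model for directly visible surfaces and
// a diffuse stand-in (albedo / pi) for surfaces seen only indirectly.
struct Color { float r, g, b; };

Color evaluateFullModel() {
    // Placeholder for an expensive fur/microfacet evaluation.
    return { 0.4f, 0.3f, 0.2f };
}

Color evaluateDiffuseProxy(const Color& averageAlbedo) {
    const float invPi = 1.0f / 3.14159265f;
    return { averageAlbedo.r * invPi, averageAlbedo.g * invPi,
             averageAlbedo.b * invPi };
}

Color evaluateScattering(int bounceDepth, const Color& averageAlbedo) {
    // Depth 0: the surface is seen directly by the camera, so its detailed
    // appearance matters. Deeper bounces use the cheap proxy.
    if (bounceDepth == 0) return evaluateFullModel();
    return evaluateDiffuseProxy(averageAlbedo);
}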
As we saw in our discussion of the Torrance-Sparrow and Cook-Torrance models,
scattering involves light inter-reflecting among multiple surfaces, resulting in
shadowed and masked parts. This is exactly the same behavior we'll see in studying
global illumination algorithms, in which the geometry of a scene causes multiply
reflected light to reach some places and not others. For real-time “solutions” (i.e.,
some games as of 2013), it turns out that we can approximate the effect of these
complex global-illumination algorithms and replace them with the idea of ambient
occlusion [Lan02], in which we make things darker where the surroundings are
locally more concave, by setting an ambient term that's proportional to how
much of the far field you can see locally. This creates higher-frequency intensity
gradients than you get with a 1/r² falloff in light intensity, makes corners dark,
and highlights concavities and convexities to give the viewer a clue about material
smoothness at a relatively large scale, since shadows are an important proximity cue.
Figure 27.20: A teddy bear rendered by Kajiya and Kay's texel-rendering algorithm. (Courtesy of Jim Kajiya. ©1989 ACM, Inc. Reprinted by permission.)
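To make the ambient-occlusion idea concrete, the following sketch estimates, by Monte Carlo sampling over the hemisphere, what fraction of the far field is visible from a point; the ambient term is then scaled by that fraction. The occlusion test is a placeholder, and the names are ours rather than anything from [Lan02].

#include <random>

struct Vec3 { float x, y, z; };

// Placeholder occlusion test: a real renderer would trace a ray against the
// scene and report whether nearby geometry blocks the far field.
bool occluded(const Vec3& p, const Vec3& d) {
    (void)p; (void)d;
    return false;
}

// Estimate the fraction of the hemisphere around normal n from which the far
// field is visible at point p.
float ambientVisibility(const Vec3& p, const Vec3& n, int samples) {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);
    int open = 0;
    for (int i = 0; i < samples; ++i) {
        // Rejection-sample a direction inside the unit sphere, then flip it
        // into the hemisphere around n.
        Vec3 d;
        do { d = { uni(rng), uni(rng), uni(rng) }; }
        while (d.x * d.x + d.y * d.y + d.z * d.z > 1.0f);
        if (d.x * n.x + d.y * n.y + d.z * n.z < 0.0f) {
            d.x = -d.x; d.y = -d.y; d.z = -d.z;
        }
        if (!occluded(p, d)) ++open;
    }
    return float(open) / float(samples);   // 1 = fully open, 0 = enclosed
}

// The ambient term is then scaled by this visibility, e.g.:
//   ambient = ambientVisibility(p, n, 64) * baseAmbient;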
Finally, it's worth standing back and looking at the microfacet models in a
larger context. When we want to render a scene faithfully, we have to take into
account how light from the luminaires scatters from each surface onto each other
surface, and the resultant complex distribution of light energy reaching the eye or
camera. The interconnectedness of all objects in the scene (or at least all mutually
visible objects) leads to algorithmic complexity. Now look at microfacet models:
They're doing the same thing! Light arriving at the surface reaches only part of
one microfacet because another shadows it, and light scattered from that micro-