surface,” or “the BRDF of the surface” (considered as a function of position on
the surface), or “the color of the surface,” etc.
In fact, many of the techniques we've encountered—BRDFs, normal maps,
displacement maps—provide representations of geometry at different scales. The
BRDF (at least in the Torrance-Sparrow-Cook formulation) is a representation
of how the microfacet slope distribution affects the reflection of light from the
surface. We could model all those microfacets, but the space and time overhead
would be prohibitive. More important, the result of sampling such a representation
(e.g., ray-tracing it) would contain horrible aliases: A typical ray hits one
particular microfacet and reflects specularly, rather than dispersing as we'd expect
for a diffuse reflection. Displacement maps and normal maps represent surface
variation even for surfaces that are, at a gross scale, represented by just a few polygons.
Because of their formulation as maps (i.e., functions on the plane), they can be
filtered to reduce aliasing artifacts, using MIP mapping, for example.
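As a concrete, much-simplified sketch of that last point, here is one way to build a
coarser MIP level of a single-channel map by 2x2 box filtering. The tiny Image type and
the assumption of square, power-of-two maps are ours, for illustration only.

    #include <cstddef>
    #include <vector>

    // Single-channel image, row-major storage (an illustrative type, not one
    // defined in the text).
    struct Image {
        std::size_t width = 0, height = 0;
        std::vector<float> texels;
        float at(std::size_t x, std::size_t y) const { return texels[y * width + x]; }
    };

    // Build the next, coarser MIP level by averaging each 2x2 block of texels.
    // Applying this repeatedly yields the MIP pyramid used to band-limit a map
    // (texture, displacement, etc.) before it is sampled.
    Image downsample(const Image& src) {
        Image dst;
        dst.width = src.width / 2;
        dst.height = src.height / 2;
        dst.texels.resize(dst.width * dst.height);
        for (std::size_t y = 0; y < dst.height; ++y) {
            for (std::size_t x = 0; x < dst.width; ++x) {
                float sum = src.at(2 * x,     2 * y)     + src.at(2 * x + 1, 2 * y)
                          + src.at(2 * x,     2 * y + 1) + src.at(2 * x + 1, 2 * y + 1);
                dst.texels[y * dst.width + x] = 0.25f * sum;
            }
        }
        return dst;
    }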
Fortunately, these techniques also constitute a hierarchy of sorts: As you sim-
plify one representation, you can push information into another. A crinkled piece
of aluminum foil, for instance, can be modeled as a complex mesh with a very
simple (specular) BRDF, if it's seen close-up. As it recedes into the distance, we can
replace the complicated geometry with a simpler planar polygon, but we can
represent the “crinkliness” by a normal map and/or displacement map. As it recedes
farther into the distance, and variations in the normal map happen at the subpixel
scale, we can use a single normal vector, but change the BRDF to be more glossy
than specular, aggregating the many individual specular reflections into a diffuse
BRDF. Many of these ideas were present (at least in a nascent form) in Whitted's
1986 paper on procedural modeling [AGW86].
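The step from “many tiny specular facets” to “one normal plus a broader BRDF lobe”
can also be sketched in code. The particular conversion below is Toksvig's
approximation, used here purely as an illustrative stand-in rather than as the method
described above: the shorter the averaged, unnormalized normal over a filtered
footprint, the more the underlying normals disagreed, and the glossier the aggregate
lobe should be.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

    // When a MIP level of a normal map averages many disagreeing normals, the
    // averaged (unnormalized) normal is shorter than 1. Toksvig's factor turns
    // that shortening into a reduced Blinn-Phong exponent, i.e., a broader lobe
    // standing in for the geometric detail that was filtered away.
    float aggregateExponent(const Vec3& averagedNormal, float baseExponent) {
        float na = length(averagedNormal);        // 1 on flat regions, < 1 on bumpy ones
        float ft = na / (na + baseExponent * (1.0f - na));
        return ft * baseExponent;                 // smaller exponent => broader highlight
    }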
The correspondence with the sampling/filtering ideas is more than mere
analogy: In rendering, we're trying to estimate various integrals, typically with
stochastic methods that use just a few samples; from these samples, we implicitly
reconstruct a function in the course of computing its integral. If the function is
ill-represented by the samples, aliasing occurs. In one-sample-per-pixel ray
tracing, for instance, any model variation that occurs at a level that's smaller than two
pixels on the screen must either (a) be filtered out, or (b) appear as aliases.
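A minimal sketch of that estimation, assuming a hypothetical radiance(x, y) callback
that stands in for “trace a ray through this image-plane point”:

    #include <functional>
    #include <random>

    // Estimate a pixel's value by averaging jittered radiance samples over the
    // pixel's footprint. With samplesPerPixel == 1 this is classic one-sample-
    // per-pixel ray tracing, and any sub-pixel variation in radiance() aliases;
    // more jittered samples both estimate the integral better and implicitly
    // filter variation finer than the sample spacing, trading aliasing for noise.
    float estimatePixel(int px, int py,
                        const std::function<float(float, float)>& radiance,
                        int samplesPerPixel, std::mt19937& rng) {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        float sum = 0.0f;
        for (int i = 0; i < samplesPerPixel; ++i) {
            sum += radiance(px + u(rng), py + u(rng));   // jittered point inside the pixel
        }
        return sum / samplesPerPixel;
    }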
In some sense, these observations give an abstract recipe for how you ought
to do graphics: You decide which radiance samples you'll need in order to
represent the image properly, and then you examine the light field itself to determine
whether taking those samples will generate aliases. If so, you determine what
variation needs to be removed from the light field; since the light field itself is
determined by the rendering equation, you can then ask, “What simplification of
the illumination or geometry of this scene would remove those problems from the
light field?” and you remake the model accordingly. When you set about rendering
this model, you get the best possible picture.
This “recipe” is an idealized one for several reasons. First, it's not obvious
how to simplify geometry and illumination to remove “only the bad stuff” from
the light field; indeed, this may be impossible. Second, determining the “bad stuff”
in the light field may require that you solve the rendering equation with the full
model as a first step, which returns you to the original question. A compromise
position is that if we filter the light sources so that they have no high-frequency
variations, and we smooth out the geometry so that it doesn't have any sharp corners
(which lead to high-frequency variations in reflected light), then the “product” of
light and geometry represented by the rendering equation will end up without too
much high-frequency variation.
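A small, self-contained sketch of the first half of that compromise, filtering a light
source: here the emission of a one-dimensional strip light (the 1D setting is our
simplification) is convolved with a truncated Gaussian so that the illumination carries
no variation much sharper than the filter width.

    #include <cmath>
    #include <vector>

    // Convolve a strip of emitted intensities with a truncated Gaussian of
    // width sigma; the weights are renormalized per sample so a constant
    // emitter stays constant near the ends of the strip.
    std::vector<float> smoothEmission(const std::vector<float>& emission, float sigma) {
        const int radius = static_cast<int>(std::ceil(3.0f * sigma));
        const int n = static_cast<int>(emission.size());
        std::vector<float> out(emission.size(), 0.0f);
        for (int i = 0; i < n; ++i) {
            float sum = 0.0f, weightSum = 0.0f;
            for (int d = -radius; d <= radius; ++d) {
                int j = i + d;
                if (j < 0 || j >= n) continue;        // truncate at the ends of the strip
                float w = std::exp(-(d * d) / (2.0f * sigma * sigma));
                sum += w * emission[j];
                weightSum += w;
            }
            out[i] = sum / weightSum;
        }
        return out;
    }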
 