objects, this direct light contains many of the discontinuities in the light field (corresponding to silhouettes, contours, and hard shadows).
The partitioning by Heckbert classes is useful, but rather coarse: Paths with
multiple specular bounces may have high “throughput,” but this only matters if
they start at light sources. There may someday be other ways of classifying paths that allow us to delimit the “bright parts” of path space more efficiently.
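To make path classification concrete, here is a minimal sketch of Heckbert's regular-expression path notation, in which a transport path is a string over {L, D, S, E} (Light source, Diffuse bounce, Specular bounce, Eye). The class names and patterns below are our own illustrative choices, not an exhaustive taxonomy:

```python
import re

# Illustrative Heckbert-style path classes; each pattern must match
# the entire path string.
CLASSES = {
    "direct illumination": re.compile(r"LDE"),
    "caustic (LS+DE)": re.compile(r"LS+DE"),
    "Whitted ray tracing": re.compile(r"LD?S*E"),
    "radiosity": re.compile(r"LD*E"),
}

def classify(path: str) -> list[str]:
    """Names of every class whose pattern matches the whole path string."""
    return [name for name, pat in CLASSES.items() if pat.fullmatch(path)]

for p in ["LDE", "LSSDE", "LSSSE", "LDDE"]:
    print(p, "->", classify(p) or "(none of the listed classes)")
```

Note that a path like LSSDE (a caustic) has multiple specular bounces and starts at a light source, exactly the kind of high-throughput path the paragraph above singles out.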
As we consider computing the reflectance integral at some point, it's
reasonable to ask, “How much do the variations in the field radiance matter?”
If the surface is Lambertian, the answer is, “Generally not too much.” If it's shiny,
then variations in field radiance matter a lot. But having computed the reflectance
integral to produce surface radiance, which may have considerable variation with
respect to outgoing direction, we can ask, “When this arrives at another surface,
how will that variation appear?” If we look at such a surface up close, moving our
eyes a few centimeters may yield substantial variation in the appearance of the
surface. But if we look at the same surface from a kilometer away, we'll have to
move our eyes dozens of meters to see the same variation. This dispersal of high-
frequency content in the light field (and other related phenomena) is discussed by
Durand et al. in a thought-provoking paper [DHS+05] that ties together the frequency content of the radiance field, in both its spatial and angular components, with ideas about appropriate rates for sampling in various rendering algorithms.
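A back-of-the-envelope way to see the scale of this effect (our own rough argument, consistent with the numbers above): if the outgoing surface radiance varies over an angular scale $\Delta\theta$, then at distance $d$ that variation is spread over a viewpoint displacement of roughly
\[
x \approx d\,\Delta\theta,
\]
so increasing $d$ from a meter to a kilometer stretches a few centimeters of variation into a few dozen meters.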
We've treated rendering as a problem of simulating sensor response to a radiance field, with the implicit goal of getting the “right” sensor value at each pixel.
This may not always be the right goal. If the image is for human consumption,
it's worth considering the end-to-end nature of the process, from model all the
way to percept. Humans are notorious, for instance, for their inability to detect
absolute levels of almost any sensation, but they are generally quite sensitive to
variation. We can't tell how bright something is, but we can reliably say that it's a
little brighter than another thing that's near it. This means that if you
had a choice between a perfect image, corrupted by noise so that a typical pixel's
value was shifted by, say, 5%, and the same perfect image, with every pixel's value
multiplied by 1.1, you'd probably prefer the second, even though the first is closer
to the perfect image in an $L^2$ sense.
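A quick numerical illustration of that claim (a sketch with made-up image values; only the 5% and 1.1 figures come from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
perfect = rng.uniform(0.2, 1.0, size=(256, 256))  # stand-in "perfect" image

# (a) each pixel perturbed by zero-mean noise of about 5% of its value
noisy = perfect * (1.0 + rng.normal(0.0, 0.05, size=perfect.shape))
# (b) every pixel multiplied by 1.1
scaled = 1.1 * perfect

def l2(a, b):
    """Root-mean-square (L2) distance between two images."""
    return np.sqrt(np.mean((a - b) ** 2))

print("L2(noisy,  perfect):", l2(noisy, perfect))   # smaller: closer in L2...
print("L2(scaled, perfect):", l2(scaled, perfect))  # larger: ...yet likely preferred
```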
Indeed, the human eye, while sensitive to absolute brightness, is much more
sensitive to contrast. It might make sense, in the future, to try to render not the
image itself, but rather its gradients, perhaps along with precise image values at a
few points. The “final step” in such a rendering scheme would be to integrate the
gradients to get an intensity field, subject to the constraints presented by the known
values; such a constrained optimization might better capture the human notion
of correctness of the image. We are not proposing this as a research direction; rather, we want to get you thinking about the big picture, and about which aspects of that big picture current methods fail to address.
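As a very rough sketch of what the “final step” of such a scheme might look like, here is a 1D toy reconstruction: given noisy gradient estimates and a few trusted absolute values, recover the intensity signal by least squares. All names and weights here are illustrative; a real 2D version would solve a screened Poisson problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
true = np.cumsum(rng.normal(0.0, 0.1, n))             # an unknown intensity signal
grads = np.diff(true) + rng.normal(0.0, 0.01, n - 1)  # noisy gradient estimates
anchors = {0: true[0], n - 1: true[n - 1]}            # a few precise image values

# One least-squares row per gradient constraint f[i+1] - f[i] = g[i] ...
rows, rhs = [], []
for i, g in enumerate(grads):
    r = np.zeros(n)
    r[i], r[i + 1] = -1.0, 1.0
    rows.append(r)
    rhs.append(g)
# ... plus heavily weighted rows pinning the trusted absolute values.
w = 100.0
for i, v in anchors.items():
    r = np.zeros(n)
    r[i] = w
    rows.append(r)
    rhs.append(w * v)

f, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print("max reconstruction error:", np.abs(f - true).max())
```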
We've concentrated on the operator-theoretic solution of the rendering equation, but we've by no means exhausted these approaches. The solution says that $(I - T)^{-1}e = (I + T + T^2 + \cdots)e$, where $e$ describes the luminaires in the scene.
If we slightly rewrite the right-hand side, we can discover other approaches based
on this solution:
\begin{align}
(I - T)^{-1}e &= (I + T + T^2 + \cdots)e, \tag{31.101}\\
&= e + (T + T^2 + \cdots)e, \quad\text{and} \tag{31.102}\\
&= e + (I + T + \cdots)Te. \tag{31.103}
\end{align}
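As a sanity check that these three groupings really are the same quantity, here is a small numerical experiment with a random matrix standing in for $T$ (scaled so that $\|T\| < 1$ and the Neumann series converges; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
T = rng.uniform(0.0, 1.0, (n, n))
T *= 0.5 / np.linalg.norm(T, 2)   # scale so ||T|| < 1 and the series converges
e = rng.uniform(0.0, 1.0, n)      # emitted light: the scene's luminaires
I = np.eye(n)

exact = np.linalg.solve(I - T, e)  # (I - T)^{-1} e

K = 50  # truncation order; the tail is negligible since ||T||^K is tiny
powers = [np.linalg.matrix_power(T, k) for k in range(K)]
s1 = sum(P @ e for P in powers)            # (I + T + T^2 + ...) e    (31.101)
s2 = e + sum(P @ e for P in powers[1:])    # e + (T + T^2 + ...) e    (31.102)
s3 = e + sum(P @ (T @ e) for P in powers)  # e + (I + T + ...) T e    (31.103)

for s in (s1, s2, s3):
    print(np.max(np.abs(s - exact)))  # all three agree with the direct solve
```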