keeping the mean estimate correct (or at least consistent, i.e., approaching the
correct answer as the number of samples is increased). After the most obvious
optimizations, however, this leads to diminishing returns. If you use your “domain
knowledge” to say “most rendering happens in scenes where the lighting doesn't
vary very fast,” you'll soon find yourself needing to render a picture of the night
sky, where essentially all lighting changes are discontinuities rather than gradual
gradients.
One of the most promising recent developments in Monte Carlo rendering is
to use the gathered samples in a different way. Rather than computing an average
of samples, or a weighted average, we can treat the samples we've gathered as
providing information about the function that they're sampling. Let's begin with
a very simple example: Suppose we tell you we have a function on the interval
[0, 1] and that it's of the form f(x) = ax + b for some values a and b (but you
don't know a and b). It's easy to show that the average value of f on the unit
interval is (a/2) + b. Now let's suppose we ask you to estimate that average using a
Monte Carlo integral. You might take ten or twenty samples, average them together,
and declare that to be an estimate of the average value of f ; this is exactly analo-
gous to what we've been doing in all our rendering so far. But suppose that you
looked more carefully at your samples, and for each one, you know both x and
f(x). For instance, maybe the first sample is (0.1, 7) and the second is (0.3, 8).
From these two samples alone, you can determine that a = 5 and b = 6.5, so the
average value is 9. From just two samples, we've generated a perfect estimate of
the average. Of course, we were only able to do so because we knew something
about the x-to-f(x) relationship.
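To make the linear example concrete, here is a minimal sketch (in Python; the function names and the choice of a = 5, b = 6.5 are purely illustrative) contrasting the ordinary sample average with the estimate obtained by fitting a and b from just two samples:

```python
import random

def f(x, a=5.0, b=6.5):
    # The "unknown" linear integrand; the estimators below don't get to see a or b.
    return a * x + b

def mc_average(n):
    # Ordinary Monte Carlo: average n samples of f at uniform random x in [0, 1].
    return sum(f(random.random()) for _ in range(n)) / n

def linear_fit_average():
    # Use two samples (x1, f(x1)) and (x2, f(x2)) to recover a and b exactly,
    # then report the true average a/2 + b.
    x1, x2 = random.random(), random.random()
    y1, y2 = f(x1), f(x2)
    a = (y2 - y1) / (x2 - x1)
    b = y1 - a * x1
    return a / 2 + b

print(mc_average(20))        # a noisy estimate near 9
print(linear_fit_average())  # exactly 9, up to floating-point error
```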
This idea has been applied to rendering by Sen and Darabi [SD11]. They posit
that the sample values for a particular pixel bear some functional relation to the
random values used in generating the samples. For instance, the sample value
might be a simple function of the displacement from the center of the pixel or
of the time value of the sample in a motion-blur rendering in which we have to
integrate over a small time interval. Because we use random numbers to select
the ray (or the time), we get random variations in the resultant samples. Sen and
Darabi estimate the relationship of the sample values to the random values used
to generate the samples. Their estimate of the relationship is not as simple as the
ax + b example above; indeed, they estimate statistical properties of the relation-
ship rather than any exact parameters. From this, they distinguish variation due
to position (which they regard as the underlying thing we're trying to estimate)
from the variation due to other injected randomness, and then use this to better
guess the pixel values, in a process they call random parameter filtering (RPF).
Figure 32.20 shows an example of the results.
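The following toy sketch (in Python; the function name and the use of a plain correlation are our own simplifications, not Sen and Darabi's method, which estimates richer statistical dependence and uses it to drive an image-space filter) illustrates only the core idea: testing how strongly a pixel's sample values depend on one of the injected random parameters, such as the motion-blur time.

```python
def dependence_on_parameter(samples):
    # samples: list of (t, value) pairs for one pixel, where t is the random
    # parameter used to generate the sample (e.g., the motion-blur time) and
    # value is the sample's radiance. Returns the absolute Pearson correlation
    # between t and value, a crude measure of how much of the pixel's variation
    # is injected by the random parameter rather than by the scene itself.
    ts = [t for t, _ in samples]
    vs = [v for _, v in samples]
    n = len(samples)
    mean_t, mean_v = sum(ts) / n, sum(vs) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    var_t = sum((t - mean_t) ** 2 for t in ts)
    var_v = sum((v - mean_v) ** 2 for v in vs)
    if var_t == 0.0 or var_v == 0.0:
        return 0.0
    return abs(cov / (var_t * var_v) ** 0.5)
```

A pixel that scores high here owes most of its variation to the random parameter, so its samples can be smoothed aggressively; a low score suggests genuine scene detail that a filter should preserve.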
In this chapter, we've developed two renderers, but they are by no means state
of the art. The book by Pharr and Humphreys [PH10] (which is nearly as large as
this book) discusses physically based rendering in great detail, and is a fine choice
for those who want to study rendering more deeply. The SIGGRAPH proceedings,
other issues of the ACM Transactions on Graphics, and the proceedings of the
Eurographics Symposium on Rendering give the student the opportunity to see
how the ideas in this chapter originally developed, and which avenues of research
proved to be dead ends and which have stood the test of time.