    s = a random subscript sequence 0, s(1), s(2), ..., s(k), 3, generated using P
    k = length(s) - 2
    p = probability for s            // product of edge probabilities
    T = term associated to subscript sequence s
    S_{s(1)} += T/p;                 // increment the entry of S named by s(1)
return (S_1/N, S_2/N)
Under very weak hypotheses on the matrix P, this algorithm provides a consistent estimator of x. That is to say, as N → ∞, the values of S_1/N and S_2/N will converge to the values of x_1 and x_2. How fast will they converge? That depends on the matrix P: For some choices of P, the estimator will have low variance, even for small N; for others, it will have large variance. You, as the user of this estimator, get to choose P.
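To make the procedure concrete, here is a minimal sketch of the estimator in Python. The function name estimate_x, the use of NumPy, and the particular M, b, and P in the example are my own choices for illustration; the "term" T is taken to be the factor m_{s(1)s(2)} ··· m_{s(k-1)s(k)} b_{s(k)} from the series expansion x = b + Mb + M²b + ···, consistent with the discussion above.

import numpy as np

def estimate_x(M, b, P, N, seed=0):
    # Estimate the solution x of x = M x + b (M is 2x2, b has two entries)
    # by running N random walks on the chain whose transition probabilities
    # are given by P.  States: 0 = start, 1 and 2 = subscripts, 3 = absorbing.
    M, b, P = np.asarray(M, float), np.asarray(b, float), np.asarray(P, float)
    rng = np.random.default_rng(seed)
    S = np.zeros(2)                                  # running sums S_1, S_2
    for _ in range(N):
        seq = [0]                                    # generate 0, s(1), ..., s(k), 3
        while seq[-1] != 3:
            seq.append(rng.choice(4, p=P[seq[-1]]))
        s = seq[1:-1]                                # the subscripts s(1), ..., s(k)
        if not s:                                    # walk jumped straight from 0 to 3
            continue
        # p = product of edge probabilities along the walk
        p = np.prod([P[seq[i], seq[i + 1]] for i in range(len(seq) - 1)])
        # T = m_{s(1)s(2)} * ... * m_{s(k-1)s(k)} * b_{s(k)}
        T = b[s[-1] - 1]
        for i in range(len(s) - 1):
            T *= M[s[i] - 1, s[i + 1] - 1]
        S[s[0] - 1] += T / p                         # the entry of S named by s(1)
    return S / N                                     # (S_1/N, S_2/N)

# Example with made-up data; the exact solution of x = M x + b here is (0.6, 0.65).
M = [[0.5, 0.0], [0.25, 0.0]]
b = [0.3, 0.5]
P = [[0.0, 0.5, 0.5, 0.0],    # from the start state, pick subscript 1 or 2
     [0.0, 0.3, 0.3, 0.4],    # from state 1: continue the walk or absorb
     [0.0, 0.3, 0.3, 0.4],    # from state 2: continue the walk or absorb
     [0.0, 0.0, 0.0, 1.0]]    # state 3 is absorbing
print(estimate_x(M, b, P, 100000))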
The following exercises give you a chance to think about “designing” P effectively. While they may seem far removed from rendering, the ideas you get from these exercises will help you understand the approaches taken in path tracing, which we'll discuss next.
Inline Exercise 31.7: (a) Suppose you know that m_12 = 0, and you're choosing the matrix P in hopes of getting rapid convergence (i.e., low variance in the estimator) of the answer. Is there a reason for picking p_12 = 0? Is there a reason for picking p_12 ≠ 0? Experiment with

    M = [ 1/2   0 ]
        [ 1/4   0 ].

(b) The transitions represented by p_01 and p_02 tell us how often (i.e., what fraction of the time) we're getting a new estimate of x_1 and how often we're getting a new estimate of x_2. What are reasonable values for these, and why? How would you choose them if you wanted to only estimate x_1?
Inline Exercise 31.8: Write a program to compute the vector x as in Inline Exercise 31.7, and experiment with different transition matrices P to see how they affect convergence. Hint: Experiment with the case where M is a diagonal matrix to see whether you can notice any patterns, and be sure to use only matrices M whose entries are non-negative and whose eigenvalues have magnitudes less than one. (For complex eigenvalues a + bi, this means a^2 + b^2 < 1.)
We advise that you spend the time doing the inline exercise above, preferably in some language like Matlab or Octave, where such programs are easy to write. Merely debugging your program will give you insight into the method we've described.
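If you work in Python instead, the eigenvalue condition in the hint is easy to verify before each experiment. Here is a small sketch; the helper name eigenvalues_ok is my own.

import numpy as np

def eigenvalues_ok(M):
    # True when every eigenvalue a + bi of M satisfies a^2 + b^2 < 1,
    # i.e., when the series b + Mb + M^2 b + ... converges.
    return bool(np.all(np.abs(np.linalg.eigvals(M)) < 1.0))

print(eigenvalues_ok([[0.5, 0.0], [0.25, 0.0]]))   # True
print(eigenvalues_ok([[1.2, 0.0], [0.0, 0.3]]))    # False: eigenvalue 1.2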
The “weak conditions” mentioned above are these: First, if m_ij > 0, then p_ij > 0. Second, if b_i ≠ 0, then p_i3 > 0. Together these ensure that any nonzero term in our infinite sum ends up being selected with nonzero probability. The dual of these conditions is also relevant: If b_i = 0, then p_i3 should be set to zero as well so that we never end up working with a term whose last factor is zero. Similarly, if m_ij = 0, we save ourselves some work by picking p_ij = 0.

The corresponding ideas, when we apply what we've learned to study light transport, are that (1) we must never ignore any light source (the condition on b_i and p_i3) and (2) we never want to select a ray direction for which the BSDF is zero, if we can avoid it (the condition on p_ij being zero).
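To see these conditions in code, here is one simple (not necessarily well-tuned) recipe, sketched under my own naming and with an arbitrary absorption weight: make p_ij proportional to m_ij, and give state i a positive absorption probability p_i3 exactly when b_i is nonzero.

import numpy as np

def make_transition_matrix(M, b, absorb_weight=1.0):
    # Build a 4x4 transition matrix P (states 0 = start, 1, 2, 3 = absorbing)
    # that respects the conditions above:
    #   m_ij > 0  =>  p_ij > 0     and     b_i != 0  =>  p_i3 > 0
    #   m_ij = 0  =>  p_ij = 0     and     b_i  = 0  =>  p_i3 = 0
    # Assumes the entries of M are non-negative, as in Inline Exercise 31.8.
    M, b = np.asarray(M, float), np.asarray(b, float)
    P = np.zeros((4, 4))
    P[0, 1] = P[0, 2] = 0.5               # from the start, pick a subscript uniformly
    for i in (1, 2):
        w = np.array([0.0, M[i - 1, 0], M[i - 1, 1],
                      absorb_weight if b[i - 1] != 0 else 0.0])
        P[i] = w / w.sum()                # assumes row i of M or b_i is nonzero
    P[3, 3] = 1.0                         # state 3 is absorbing
    return P

print(make_transition_matrix([[0.5, 0.0], [0.25, 0.0]], [0.3, 0.5]))

Choosing p_ij in proportion to m_ij is, roughly, the analogue of preferring ray directions where the BSDF is large rather than merely avoiding directions where it is zero; that importance-sampling idea returns in the path-tracing discussion that follows.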
 
 