We're going to describe two approaches to finding $x_1$: one nonrecursive and the other recursive. The first is somewhat more complex, and you can skip it if you'd like. The reasons for including it are as follows.
• It provides the justification for the second method.
• The particular formulation is one that you will see often in modern rendering research papers.
31.18.1 The Markov Chain Approach
If we attempt to find the value of $x_1$ using Equation 31.79, we'll need to sum up infinitely many terms; they are the first entries of $b$, $Mb$, $M^2b$, and so on. The first few are
$$b_1, \tag{31.81}$$
$$m_{11}b_1 + m_{12}b_2, \quad\text{and} \tag{31.82}$$
$$(m_{11}m_{11} + m_{12}m_{21})b_1 + (m_{11}m_{12} + m_{12}m_{22})b_2. \tag{31.83}$$
The last term, expanded out, is
$$m_{11}m_{11}b_1 + m_{12}m_{21}b_1 + m_{11}m_{12}b_2 + m_{12}m_{22}b_2. \tag{31.84}$$
From Section 31.5.1, we have a method for estimating such an infinite sum: Select a random term $a_i$ in the series (according to some probability distribution $p$ on the positive integers); then $a_i/p(i)$ is an estimator for the sum. In this application, we'll treat each individual summand in Equations 31.81–31.83 as a term, so the first term is $b_1$, the second is $m_{11}b_1$, and so on.
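Concretely, the estimator looks like the following minimal Python sketch (our own illustration, not code from the text): draw an index $i$ with probability $p(i)$, then return $a_i/p(i)$; averaged over many draws, this converges to the sum.

```python
import random

def estimate_sum(term, p, sample_index):
    """One-sample estimator of an infinite sum: E[term(i)/p(i)] = sum_i term(i)."""
    i = sample_index()
    return term(i) / p(i)

# Illustration: sum_{i>=1} 0.8^i = 4.  Sample i from a geometric distribution
# with success probability q = 0.3, so p(i) = 0.3 * 0.7**(i - 1).
def sample_geometric(q=0.3):
    i = 1
    while random.random() > q:
        i += 1
    return i

term = lambda i: 0.8 ** i
p = lambda i: 0.3 * 0.7 ** (i - 1)
samples = [estimate_sum(term, p, sample_geometric) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should print roughly 4.0
```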
If you look at the sequence of subscripts in any term—say, $m_{12}m_{21}b_1$—you'll notice the following (a short sketch after the list makes the correspondence concrete).
• The subscripts start at 1, because we're computing the first entry of the answer, $x_1$.
• Each subsequent subscript appears twice in a row (in this case, two 2s, then two more 1s). This repetition is a consequence of the definition of matrix multiplication.
• Every such sequence occurs.
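Because of that doubling, a term is fully determined by the collapsed sequence of distinct visits: $m_{12}m_{21}b_1$ corresponds to $1, 2, 1$, and in general $1, i_1, \ldots, i_k$ names the summand $m_{1i_1}m_{i_1i_2}\cdots m_{i_{k-1}i_k}b_{i_k}$. Here is a minimal Python sketch of that evaluation; the name `term_from_indices` and the $2\times 2$ values for $M$ and $b$ are placeholders of ours, not anything from the text.

```python
def term_from_indices(M, b, idx):
    """Evaluate the summand named by an index sequence, e.g. [1, 2, 1]
    yields m_12 * m_21 * b_1 (indices are 1-based, as in the text)."""
    value = 1.0
    for i, j in zip(idx, idx[1:]):    # consecutive pairs give the m factors
        value *= M[i - 1][j - 1]      # convert to Python's 0-based indexing
    return value * b[idx[-1] - 1]     # trailing factor b_{i_k}

# Placeholder values; any M whose Neumann series converges works here.
M = [[0.2, 0.3],
     [0.1, 0.4]]
b = [1.0, 2.0]

print(term_from_indices(M, b, [1]))        # b_1 = 1.0, the term in Eq. 31.81
print(term_from_indices(M, b, [1, 2, 1]))  # m_12 * m_21 * b_1 = 0.03
```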
Thus, to pick a term at random, we need only to pick a “random” finite sequence of subscripts starting at 1. We'll do so by a slightly indirect method.

Figure 31.22 shows a probabilistic finite-state automaton (FSA). The edge from node $i$ to node $j$ is labeled with a probability $p_{ij}$, which you can read as the probability that if we're at node $i$ at some moment, we'll next move to node $j$. (In a probabilistic FSA of this sort, the path you take through the FSA is not determined by an input string, but instead by random choices at each node.)

By starting at node 0 and making random transitions to other states, using the probabilities $p_{ij}$ that label the edges of the graph, we'll eventually arrive at state 3, where we'll stop. The sequence we generate will start at 0 and end at 3. In between will be a sequence of 1s and 2s that we'll use as our index sequence.
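The walk itself is only a few lines of code. In this sketch the transition probabilities are placeholders of our own, not the values from Figure 31.22 (which we don't reproduce); the only requirements are that each row of probabilities sums to 1 and that state 3 is reachable, so the walk terminates.

```python
import random

# Placeholder transition probabilities p[i][j]; state 0 starts the walk,
# states 1 and 2 emit indices, and state 3 stops it. Each row sums to 1.
p = {
    0: {1: 0.5, 2: 0.5},
    1: {1: 0.3, 2: 0.3, 3: 0.4},
    2: {1: 0.3, 2: 0.3, 3: 0.4},
}

def random_index_sequence():
    """Walk from state 0 until state 3; return the 1s and 2s visited between."""
    state, seq = 0, []
    while state != 3:
        states, weights = zip(*p[state].items())
        state = random.choices(states, weights)[0]
        if state != 3:
            seq.append(state)
    return seq

print(random_index_sequence())   # e.g. [2, 1] or [1] or [1, 2, 2]
```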
An FSA like this is a visual representation of a Markov chain—a sequence of states with transition probabilities, with the constraint that the probability of being at some state $i$ at step $k + 1$ depends only on where you were at step $k$, and not on any history of your prior steps. Such a Markov chain can be completely described by the matrix $P$ of transition probabilities $p_{ij}$, which has the Markov property:
$$\Pr(X_{k+1} = j \mid X_k = i, X_{k-1}, \ldots, X_0) = \Pr(X_{k+1} = j \mid X_k = i) = p_{ij}.$$
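Putting the pieces together gives a complete single-sample estimator for $x_1$. The sketch below reuses `term_from_indices`, `M`, `b`, and the placeholder transition table `p` from the earlier sketches, and makes one simplifying assumption of our own: since the index sequence must begin with 1, the walk starts directly at node 1 rather than node 0. Each walk names a summand; dividing that summand's value by the probability of the path that produced it gives an unbiased estimate of the whole series, and averaging many such estimates approximates $x_1$.

```python
def estimate_x1(M, b, p, trials=200_000):
    """Average of single-sample Markov-chain estimates of x_1.

    Each walk starts at node 1 (the index sequence must begin with 1),
    moves with the probabilities p[i][j], and stops on reaching node 3.
    The estimate is the summand named by the visited states, divided by
    the probability of the path that produced it.
    """
    total = 0.0
    for _ in range(trials):
        seq, prob, state = [1], 1.0, 1
        while state != 3:
            states, weights = zip(*p[state].items())
            nxt = random.choices(states, weights)[0]
            prob *= p[state][nxt]
            if nxt != 3:
                seq.append(nxt)
            state = nxt
        total += term_from_indices(M, b, seq) / prob
    return total / trials

# For the placeholder M and b above, the true x = (I - M)^{-1} b has
# x_1 = 1.2 / 0.45 ≈ 2.667; the printed estimate should be close.
print(estimate_x1(M, b, p))
```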