31.18.1.1 An Alternative Markov Chain Estimator
When we generated the sequence 01223, we used it to compute a single term of
the series, and the associated probability. But there's a slightly different approach,
due to Wasow, in which we can use a k-term sequence to generate k different
estimates for the sum all at once. It's described in detail, and a rigorous proof
of its correctness is given, in Spanier and Gelbard [SG69]. Here we'll give an
informal derivation of the method.
The idea is this: When we had generated the partial sequence 012, there were
several possibilities. We could have generated a 3 next, ending the sequence, or
we could have gone on to generate another 1 or 2. If you imagine walking through
the FSA thousands of times, some fraction of those times your initial sequence
will be 012. If there are 100 such initial sequences, then about 30 of them will be 0123, because $p_{23} = 0.3$ in our particular FSA. About 20 of them will have the form 0121..., and about 50 will have the form 0122....

Under the basic scheme, the 30 terminating sequences (0123) would each have associated probability $(0.5 \cdot 0.2) \cdot 0.3$, where we've put parentheses around everything except the last factor. The associated term in the sum we're estimating is $m_{12} b_2$, and for each of those 30 sequences, our estimate of the sum is

$$\frac{m_{12} b_2}{(0.5 \cdot 0.2) \cdot 0.3} = \left(\frac{m_{12} b_2}{0.5 \cdot 0.2}\right) \Big/ \; 0.3. \tag{31.87}$$

So, among all sequences starting with 012, 30% contribute $m_{12} b_2 / ((0.5 \cdot 0.2) \cdot 0.3)$ as their estimate of the sum. Suppose that instead we made 100% of the sequences contribute the smaller value $m_{12} b_2 / (0.5 \cdot 0.2)$ (the division by 0.3 has been removed) as part of their estimate of the sum. The expected value would be the same, because we'd be averaging either 30 copies of the larger value or 100 copies of the smaller value!
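As a quick numeric check of that averaging argument, here is a short Python sketch; the values of $m_{12}$ and $b_2$ are arbitrary placeholders, and only the probabilities 0.5, 0.2, and 0.3 come from the example.

    m12, b2 = 0.4, 1.0
    large = m12 * b2 / ((0.5 * 0.2) * 0.3)  # value from a terminating 0123 sequence
    small = m12 * b2 / (0.5 * 0.2)          # same value with the division by 0.3 removed
    print(30 * large, 100 * small)          # both are (up to roundoff) 400.0: 30 copies
                                            # of the larger value and 100 copies of the
                                            # smaller value have the same total, hence
                                            # the same average over the 100 sequences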
Of course, we apply the same logic to every single initial sequence, and we arrive
at a new version of the algorithm. Before we do so, however, let's look at the value
$m_{12} b_2 / (0.5 \cdot 0.2)$ above. It arises as

$$\frac{m_{12} b_2}{0.5 \cdot 0.2} = \frac{1}{p_{01}} \cdot \frac{m_{12}}{p_{12}} \cdot b_2. \tag{31.88}$$
More generally, in a longer index sequence, we'll have a first factor of the form $1/p_{0i}$ for $i = 1$ or $2$, followed by several terms of the form $m_{ij}/p_{ij}$, and finally a $b_k$. The
revised algorithm now looks like the code in Listing 31.4.
Listing 31.4: Estimating matrix entries with a Markov chain and the Wasow estimator.

Input: A 2 × 2 matrix M and a vector b, both with indices 1 and 2, and the number N of samples to use.
Output: an estimate of the solution x to x = Mx + b.
P = 4 × 4 array of transition probabilities, with indices 0, ..., 3, as in Figure 31.22
S1 = S2 = 0 // sums of samples for the two entries of x
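To make the revised scheme concrete, here is a minimal runnable sketch in Python, following the derivation above. It is a sketch under stated assumptions, not the book's listing: the matrix M and vector b are arbitrary placeholders, and the only transition probabilities taken from the text are p01 = 0.5, p12 = 0.2, p21 = 0.2, p22 = 0.5, and p23 = 0.3; the remaining entries (p02, p11, p13) are assumptions chosen so that each row of P sums to 1.

    import random

    # Placeholder problem: solve x = Mx + b for a 2 x 2 matrix M whose
    # Neumann series converges. (These values of M and b are arbitrary.)
    M = [[0.2, 0.4],
         [0.3, 0.1]]               # entry m_ij stored at M[i-1][j-1]
    b = [1.0, 2.0]

    # Transition probabilities with indices 0..3, in the spirit of Figure 31.22.
    # p01, p12, p21, p22, p23 come from the text; the rest are assumed.
    P = [[0.0, 0.5, 0.5, 0.0],     # state 0: the start state (p02 assumed)
         [0.0, 0.5, 0.2, 0.3],     # state 1 (p11, p13 assumed)
         [0.0, 0.2, 0.5, 0.3],     # state 2 (from the text)
         [0.0, 0.0, 0.0, 1.0]]     # state 3: the absorbing end state

    def wasow_estimate(N):
        S = [0.0, 0.0]             # sums of samples for the two entries of x
        for _ in range(N):
            # First step 0 -> i contributes the factor 1/p_0i.
            i = random.choices([1, 2], weights=[P[0][1], P[0][2]])[0]
            s = i                  # this walk contributes to the entry x_s
            u = 1.0 / P[0][i]
            S[s - 1] += u * b[i - 1]          # prefix "0 i" contributes u * b_i
            while True:
                j = random.choices([1, 2, 3], weights=P[i][1:])[0]
                if j == 3:                    # walk reached the end state
                    break
                u *= M[i - 1][j - 1] / P[i][j]  # one more m_ij / p_ij factor
                S[s - 1] += u * b[j - 1]      # every prefix contributes u * b_k
                i = j
        return [S[0] / N, S[1] / N]

    print(wasow_estimate(200_000))
    # For this placeholder M and b, solving x = Mx + b directly gives
    # x = (2.833..., 3.166...), which the estimate approaches as N grows.

Note how each walk adds one sample per visited prefix, with u maintained multiplicatively; this is exactly the $1/p_{0i}$, $m_{ij}/p_{ij}$, $b_k$ factor structure of Equation 31.88.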
 
 