consequence, if the observation o_t corresponds to a low-probability transition in x_{t−1}, drawing particles from P(X_t | x_{t−1}) moves all particles in the "wrong" direction (i.e., contrary to the information provided by the observation), and resampling will have a hard time moving them back to the interesting states.
The solution is to draw new particle positions from an importance density function that takes the observation o_t into account. It has been shown in [3] that the target distribution P(X_t | x_{t−1}, o_t) minimizes the variance of the importance weights w_t conditioned on x_{1:t−1} and o_{1:t}. In practice, it is often difficult to sample from P(X_t | x_{t−1}, o_t). Fortunately, in our case, it is possible to obtain P(X_t | X_{t−1}, o_t) in closed form [4], which leads to an optimal filter.
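For a discrete HMM, the closed-form optimal importance density is just a pointwise product followed by normalization. A minimal sketch, assuming hypothetical transition and emission matrices A and B (not the paper's code):

```python
import numpy as np

def optimal_proposal(A, B, x_prev, o):
    """Closed-form optimal importance density for a discrete HMM:
    P(X_t | x_{t-1}, o_t) is proportional to P(o_t | X_t) * P(X_t | x_{t-1}).
    A[i, j] = P(X_t = j | X_{t-1} = i); B[j, o] = P(O_t = o | X_t = j)."""
    unnorm = A[x_prev] * B[:, o]   # joint weight of each candidate next state
    return unnorm / unnorm.sum()   # normalizer equals P(o_t | x_{t-1})
```

Note that the normalizing constant, unnorm.sum(), is exactly the predictive likelihood P(o_t | x_{t−1}) used later in the weight update.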
5.4 The Forward Step
The loop in Lines 3-29 of the RVPF algorithm estimates the state of the system after each observation. Lines 5-9 handle the regular case in which an observation o_t is available. Lines 11-23 handle gaps.
Handling Program Events. If an observation o_t is available, each particle currently in (x_{t−1}, s_{t−1}) is first moved to (x_t, s_{t−1}) by sampling from P(X_t | x_{t−1}, o_t) in Line 6. In the next step, the particle is moved to (x_t, s_t) by sampling from P(S_t | s_{t−1}, o_t). Note that P(S_t | s_{t−1}, o_t) is a conditional probability table which corresponds to the DFA transition function δ. Therefore, the sampling step in Line 7 is guaranteed to return s_t = δ(s_{t−1}, o_t). Subsequently, in Line 8, the importance weight of each particle is updated by multiplying its current weight with the value from the precomputed matrix P(O_t | X_{t−1}), where O_t = o_t and X_{t−1} = x_{t−1}. If the number of particles with significant weights becomes too low (which is estimated in Lines 26-28), the particle positions are resampled in Line 28, based on the weight distribution w. This concentrates the particles in the more probable regions of the state space.
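The per-event update (Lines 6-8) and the effective-sample-size test can be sketched as follows; the matrix names A and B and the tabulated transition function delta are assumed encodings, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def handle_event(particles, weights, A, B, delta, o):
    """One forward step for an observed event o.
    A[i, j] = P(X_t = j | X_{t-1} = i), B[j, o] = P(O_t = o | X_t = j),
    delta[s][o] = DFA transition function (all hypothetical encodings)."""
    for i, (x, s) in enumerate(particles):
        post = A[x] * B[:, o]                         # unnormalized P(X_t | x, o)
        pred = post.sum()                             # predictive likelihood P(o | x)
        x_new = rng.choice(len(post), p=post / pred)  # Line 6: optimal proposal
        s_new = delta[s][o]                           # Line 7: deterministic DFA step
        weights[i] *= pred                            # Line 8: weight update
        particles[i] = (x_new, s_new)
    weights /= weights.sum()                          # keep weights normalized
    return particles, weights

def ess(weights):
    """Effective sample size (Lines 26-28): resample when this drops too low."""
    return 1.0 / np.sum(weights ** 2)
```

Because delta is deterministic, "sampling" s_t is a table lookup, so only the HMM state and the weight involve randomness.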
Handling Gaps. Upon encountering a gap of length ℓ in the trace, the RVPF algorithm moves each particle, from its current state (x_{t−1}, s_{t−1}), ℓ steps forward to state (x_{t+ℓ−1}, s_{t+ℓ−1}), following the probability distributions in the HMM. A single step consists of: sampling an observation o_t from P(O_t | x_{t−1}) in Line 16; sampling the next HMM state x_t from P(X_t | x_{t−1}, o_t) in Line 17; sampling the next DFA state s_t from P(S_t | s_{t−1}, o_t) in Line 18 (which, again, corresponds to advancing the DFA using s_t = δ(s_{t−1}, o_t)); and updating the particle weight in Line 19 using the same equation as in Line 8.
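One such gap step (Lines 16-19) can be sketched for a single particle as below; as before, A, B, and delta are hypothetical encodings of the HMM and DFA:

```python
import numpy as np

rng = np.random.default_rng(1)

def gap_step(x, s, w, A, B, delta, rng=rng):
    """One of the steps taken during a gap, for a particle (x, s) with weight w.
    The missing observation is hallucinated from the HMM itself."""
    obs_dist = A[x] @ B                    # Line 16: P(O_t | x), marginalized over X_t
    o = rng.choice(B.shape[1], p=obs_dist)
    post = A[x] * B[:, o]                  # Line 17: sample x_t from P(X_t | x, o)
    x = rng.choice(len(post), p=post / post.sum())
    s = delta[s][o]                        # Line 18: advance the DFA
    w *= obs_dist[o]                       # Line 19: same update as Line 8, P(o | x)
    return x, s, w
```

Iterating this function ℓ times advances a particle across the whole gap.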
Handling Peek Events. Peek events help correct the movement errors introduced by using the HMM model during gaps. After each gap, a peek operation inspects a variable or a set of variables in the program state and returns an observation q_t. In Line 23, each particle i is weighted by P(q_t | s_t^(i)), the probability that DFA state s_t^(i) assigns to the evidence q_t. Particles with impossible DFA states are assigned a weight of zero, and particles with possible DFA states are assigned a weight of one. The resampling in Line 28 redistributes all particles across the possible DFA states.
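For such hard evidence, the peek update (Lines 23 and 28) reduces to zeroing out inconsistent particles and resampling. A sketch under the assumption that the evidence q_t is encoded as a set possible_states of consistent DFA states:

```python
import numpy as np

def handle_peek(particles, weights, possible_states):
    """Line 23: weight each particle by P(q_t | s^(i)), which here is 1 if its
    DFA state is consistent with the peeked evidence q_t and 0 otherwise."""
    for i, (_, s) in enumerate(particles):
        if s not in possible_states:
            weights[i] = 0.0
    return weights / weights.sum()

def resample(particles, weights, rng=np.random.default_rng(2)):
    """Line 28: multinomial resampling by weight; every surviving particle
    sits in a DFA state consistent with the peek."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [particles[i] for i in idx], np.full(len(particles), 1.0 / len(particles))
```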