5.2 Initial Particle Distribution
The function INITIALIZEPARTICLEDISTRIBUTION, which is invoked on Line 2 of the RVPF algorithm, distributes N_p particles in the state space based on the initial probability distributions P(X_0) and P(S_0) of the HMM and DFA, respectively. In the code for this function, variable D_{i,j} holds the number of particles in HMM state x_i and DFA state s_j. The rationale for using ⌈N_p · P(x_0^(i)) · P(s_0^(j))⌉ on Line 3 is to guarantee that every state with a non-zero initial probability will contain at least one particle. The code in Lines 1-5 is guaranteed to generate at least N_p particles. If the number of generated particles exceeds N_p, the number is reduced in Lines 6-9 by removing individual particles from the richest states.
Algorithm INITIALIZEPARTICLEDISTRIBUTION
Input: Initial probability distributions P(X_0) and P(S_0) of the HMM and DFA, respectively;
       number of particles N_p
Output: Initial positions x_0, s_0 and weights w_0 of particles
1   for i = 1 to |dom(X_0)| do
2       for j = 1 to |dom(S_0)| do
3           D_{i,j} = ⌈N_p · P(x_0^(i)) · P(s_0^(j))⌉
4       end
5   end
6   while Σ_{i=1}^{|dom(X_0)|} Σ_{j=1}^{|dom(S_0)|} D_{i,j} > N_p do
7       find a, b for which D_{a,b} = max(D)
8       D_{a,b} = D_{a,b} − 1
9   end
10  n = 1
11  for i = 1 to |dom(X_0)| do
12      for j = 1 to |dom(S_0)| do
13          for k = 1 to D_{i,j} do
14              (x_0^(n), s_0^(n), w_0^(n)) = (x_i, s_j, 1/N_p)
15              n = n + 1
16          end
17      end
18  end
19  return (x_0, s_0, w_0)
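The steps above can be sketched in Python as follows. This is a minimal illustration, assuming the initial distributions P(X_0) and P(S_0) are given as plain lists of probabilities; states are represented by their indices:

```python
import math

def initialize_particle_distribution(p_x0, p_s0, n_p):
    """Distribute n_p particles over HMM/DFA state pairs (x_i, s_j).

    p_x0 and p_s0 stand in for P(X_0) and P(S_0), given here as lists
    of probabilities (an assumption of this sketch).
    """
    # Lines 1-5: ceiling allocation, so every state pair with non-zero
    # initial probability receives at least one particle.
    d = [[math.ceil(n_p * px * ps) for ps in p_s0] for px in p_x0]

    # Lines 6-9: trim the surplus by removing particles from the
    # richest state pairs until exactly n_p particles remain.
    while sum(sum(row) for row in d) > n_p:
        a, b = max(((i, j) for i in range(len(p_x0))
                    for j in range(len(p_s0))),
                   key=lambda ij: d[ij[0]][ij[1]])
        d[a][b] -= 1

    # Lines 10-18: emit the particles with uniform weight 1/n_p.
    x0, s0, w0 = [], [], []
    for i in range(len(p_x0)):
        for j in range(len(p_s0)):
            for _ in range(d[i][j]):
                x0.append(i)
                s0.append(j)
                w0.append(1.0 / n_p)
    return x0, s0, w0
```

Because the ceilings of the N_p · P(x_0^(i)) · P(s_0^(j)) terms sum to at least N_p, the trimming loop always terminates with exactly N_p particles.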
5.3 Deriving the Optimal Importance Density Function
The simplest form of the SIR particle filter, known as the bootstrap filter, uses the transition prior P(X_t | x_{t−1}) as the importance density function, i.e., the probability distribution from which new particle positions are drawn. Subsequent weight calculations are performed based on the observation probabilities P(o_t | x_t), and the particles are then moved to the interesting regions of the state space through resampling. This approach, however, gives poor results in our setting.
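For comparison, one step of the bootstrap filter just described can be sketched as follows. The transition and observation tables here are illustrative placeholders, not the learned HMM of the paper:

```python
import random

def bootstrap_step(particles, weights, obs, trans, obs_prob):
    """One SIR step using the transition prior as importance density.

    trans[x] is the transition distribution P(X_t | x_{t-1}) as a list;
    obs_prob[x][o] is the observation probability P(o_t | x_t).
    Both are placeholder tables for this sketch.
    """
    n = len(particles)
    # Propose: draw each new position from the transition prior.
    proposed = [random.choices(range(len(trans[x])), weights=trans[x])[0]
                for x in particles]
    # Weight: multiply by the observation likelihood, then normalise.
    w = [wi * obs_prob[x][obs] for wi, x in zip(weights, proposed)]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample: concentrate particles in high-weight regions and
    # reset the weights to uniform.
    resampled = random.choices(proposed, weights=w, k=n)
    return resampled, [1.0 / n] * n
```

When the transition prior concentrates probability on states the observation contradicts, most proposed particles receive near-zero weight, which is why this scheme degrades in the setting described next.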
The probability distributions of our learned HMMs often have large transition probabilities of x_{t−1} associated with small observation probabilities of x_t, and small transition probabilities of x_{t−1} associated with large observation probabilities of x_t. As a