Digital Signal Processing Reference
In-Depth Information
can be used for the estimation of the desired unknowns. Before the next step n + 1, the effective particle size is estimated, and resampling is carried out if necessary.
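The resampling decision described above can be sketched as follows. This is an illustrative NumPy sketch, not code from the text: it assumes normalized weights, estimates the effective particle size as N_eff = 1 / Σ_m (w^(m))², and resamples (here with the common systematic scheme) when N_eff falls below a threshold such as M/2.

```python
import numpy as np

def effective_sample_size(weights):
    """Estimate N_eff = 1 / sum_m (w^(m))^2 for normalized weights."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(particles, weights, rng):
    """Systematic resampling: one uniform offset, M evenly spaced
    positions, and particles picked in proportion to their weights."""
    M = len(weights)
    positions = (rng.uniform() + np.arange(M)) / M
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[indices], np.full(M, 1.0 / M)

# Resample only when N_eff drops below a threshold, e.g. M / 2.
rng = np.random.default_rng(0)
M = 1000
particles = rng.normal(size=M)
weights = rng.uniform(size=M)
weights /= weights.sum()
if effective_sample_size(weights) < M / 2:
    particles, weights = systematic_resample(particles, weights, rng)
```

The threshold M/2 is a common heuristic, not a prescription from the text; after resampling, all weights are reset to 1/M.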
It is important to note that resampling becomes a major obstacle for efficient
implementation of particle filtering algorithms in parallel very large scale integration
(VLSI) hardware devices, because it creates full data dependencies among processing
units [11]. Although some methods have been recently proposed [12], parallelization
of resampling algorithms remains an open area of research.
5.5 SOME PARTICLE FILTERING METHODS
In this section we present three different particle filtering methods: sampling-importance-resampling (SIR), auxiliary particle filtering (APF), and Gaussian particle filtering (GPF). The common feature of these methods
is that at time instant n, they are represented by a discrete random measure given by χ(n) = {x^(m)(n), w^(m)(n)}, m = 1, ..., M, where, as before, x^(m)(n) is the m-th particle of the state vector at time instant n, w^(m)(n) is the weight of that particle, and M is the number of particles. For each of these filters, we show how this random measure is obtained from χ(n − 1) by using the observation vector y(n).
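In code, such a random measure is conveniently stored as a pair of arrays. The following NumPy sketch is illustrative (the dimensions and the MMSE-estimate example are assumptions, not taken from the text):

```python
import numpy as np

# A discrete random measure chi(n) = {x^(m)(n), w^(m)(n)}, m = 1, ..., M,
# stored as two arrays: M particles of a (here two-dimensional) state
# vector, and M normalized weights.
M = 500
state_dim = 2
rng = np.random.default_rng(1)

particles = rng.normal(size=(M, state_dim))  # x^(m)(n), one row per particle
weights = np.full(M, 1.0 / M)                # w^(m)(n), uniform after resampling

# Posterior expectations E[g(x(n)) | y(1:n)] are approximated by weighted
# sums over the particles, e.g. the MMSE estimate of the state:
x_mmse = np.sum(weights[:, None] * particles, axis=0)
```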
5.5.1 SIR particle filtering
The SIR method is the simplest of all particle filtering methods. It was proposed in [31], where it was named the bootstrap filter. The SIR method employs the prior density for drawing particles, which implies that the weights are proportional only to the likelihood of the drawn particles. 8 We now explain this in more detail.
Recall that if the particles are generated from a proposal density π(x(n)), and the weights of the particles in the previous time step were w^(m)(n − 1), then upon the reception of the measurement y(n), the weights are updated by

w^(m)(n) = w^(m)(n − 1) f(y(n) | x^(m)(n)) f(x^(m)(n) | x^(m)(n − 1)) / π(x^(m)(n) | x^(m)(n − 1), y(1:n)),
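The general weight update can be expressed compactly when the three density values are evaluated per particle. The following sketch is illustrative (the function name and the toy density values are assumptions, not from the text):

```python
import numpy as np

def update_weights(w_prev, lik, trans, prop):
    """General importance-weight update for one time step:
    w^(m)(n) is proportional to
        w^(m)(n-1) * f(y(n)|x^(m)(n)) * f(x^(m)(n)|x^(m)(n-1))
        / pi(x^(m)(n)|x^(m)(n-1), y(1:n)),
    followed by normalization. Inputs are per-particle density values."""
    w = w_prev * lik * trans / prop  # nonnormalized weights
    return w / w.sum()

# Toy example with made-up density values for M = 4 particles.
w_prev = np.full(4, 0.25)
lik = np.array([0.1, 0.4, 0.3, 0.2])  # f(y(n) | x^(m)(n))
trans = np.full(4, 0.5)               # f(x^(m)(n) | x^(m)(n-1))
prop = np.full(4, 0.5)                # pi(x^(m)(n) | x^(m)(n-1), y(1:n))
w = update_weights(w_prev, lik, trans, prop)
# With trans == prop, the normalized weights equal the normalized likelihoods.
```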
where w^(m)(n) denotes a nonnormalized weight. If the proposal distribution is equal to the prior, that is,

π(x(n) | x^(m)(n − 1), y(1:n)) = f(x(n) | x^(m)(n − 1)),

the computation of the weights simplifies to

w^(m)(n) = w^(m)(n − 1) f(y(n) | x^(m)(n)).
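A complete SIR step can then be sketched for a concrete model. The scalar random-walk model with Gaussian observations below is an assumption chosen for illustration; it is not a model from the text. Each step draws particles from the prior, weights them by the likelihood alone, and resamples:

```python
import numpy as np

def sir_step(particles, y, rng, q=1.0, r=0.5):
    """One SIR (bootstrap) step for an assumed scalar model
    x(n) = x(n-1) + u(n), u ~ N(0, q);  y(n) = x(n) + v(n), v ~ N(0, r).
    Particles are drawn from the prior, so each weight is proportional
    to the likelihood f(y(n) | x^(m)(n)) alone."""
    M = len(particles)
    # 1. Draw new particles from the prior (transition density).
    particles = particles + rng.normal(scale=np.sqrt(q), size=M)
    # 2. Weight by the likelihood of the new measurement (Gaussian).
    w = np.exp(-0.5 * (y - particles) ** 2 / r)
    w /= w.sum()
    # 3. Resample (multinomial here) and reset the weights to 1/M.
    idx = rng.choice(M, size=M, p=w)
    return particles[idx], np.full(M, 1.0 / M)

rng = np.random.default_rng(2)
particles = rng.normal(size=2000)
x_true = 3.0
for _ in range(10):
    y = x_true + rng.normal(scale=np.sqrt(0.5))
    particles, weights = sir_step(particles, y, rng)
estimate = particles.mean()  # posterior-mean estimate of the state
```

Because resampling is performed at every step, the incoming weights are all equal, which is exactly the assumption behind the simplified update above.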
8 Here we assume that the weights from the previous time instant are all equal due to resampling.
 