There is another group of approaches for nonlinear / non-Gaussian filtering, which
is based on a different paradigm. These methods exploit the idea of evaluating the
required PDFs over deterministic grids [14, 46, 49, 71]. Namely, the state space is
represented by a fixed set of nodes and values associated with them, which represent
the computed PDFs at the nodes. The final estimates of the densities are obtained
by piecewise linear (first-order spline) interpolations. These methods can work well
for low-dimensional state spaces; otherwise, they quickly become computationally
prohibitive.
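The grid idea can be made concrete with a small sketch. The following Python snippet is only an illustration under simplifying assumptions (a one-dimensional state and a hypothetical unnormalized density passed in as unnorm_pdf); it is not an implementation of any of the cited methods.

```python
import numpy as np

def grid_density_estimate(unnorm_pdf, lo, hi, n_nodes=200):
    """Approximate a PDF on a fixed, deterministic grid of nodes."""
    nodes = np.linspace(lo, hi, n_nodes)              # fixed set of nodes
    values = unnorm_pdf(nodes)                        # density evaluated at the nodes
    values /= values.sum() * (nodes[1] - nodes[0])    # normalize (Riemann sum)
    pdf_hat = lambda x: np.interp(x, nodes, values)   # piecewise linear (first-order spline) interpolation
    return nodes, values, pdf_hat

# Example: a Gaussian-shaped density approximated on [-5, 5].
nodes, values, pdf_hat = grid_density_estimate(lambda x: np.exp(-0.5 * x**2), -5.0, 5.0)
print(pdf_hat(0.0))   # close to 1/sqrt(2*pi) ~ 0.3989
```

In higher dimensions the number of nodes grows exponentially, which is why such grids become computationally prohibitive.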
The particle filtering methodology 1 is very similar in spirit to the one that exploits
deterministic grids. However, there is a subtle but very important difference between
them. Particle filtering uses random grids, which means that the location of the nodes
vary with time in a random way. In other words, at one time instant the grid is com-
posed of one set of nodes, and at the next time instant, this set of nodes is completely
different. The nodes are called particles, and they have assigned weights, which can be
interpreted as probability masses. The particles and the weights form a discrete
random measure, 2 which is used to approximate the densities of interest. In the
generation of particles, each particle has a “parent”, the parent has its own parent,
and so on. Such a sequence of particles is called a particle stream, and it represents
one possible evolution of the state with time. Filters based on this methodology are
called particle filters (PFs).
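As a small illustration of a discrete random measure, the sketch below uses randomly placed one-dimensional particles with equal weights; the particle locations and parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles (randomly placed nodes) and their normalized weights.
particles = rng.normal(loc=1.0, scale=2.0, size=1000)    # random node locations
weights = np.full(particles.size, 1.0 / particles.size)  # probability masses

# The random measure approximates expectations under the density of interest
# by weighted sums over the particles.
posterior_mean = np.sum(weights * particles)
posterior_var = np.sum(weights * (particles - posterior_mean) ** 2)
print(posterior_mean, posterior_var)
```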
A main challenge in implementing PFs is to place the nodes of the grids (i.e., gen-
erate particles) in regions of the state space over which the densities carry significant
probability masses and to avoid placing nodes in parts of the state space with negli-
gible probability masses. To that end, one exploits concepts from statistics known
as importance sampling [48] and sampling-importance-resampling [68], which are
explained in the sequel. For the computation of the particle weights, one follows
the Bayesian theory. The sequential processing amounts to recursive updating of
the discrete random measure with the arrival of new observations, where the updates
correspond to the generation of new particles and computation of their weights. Thus,
one can view particle filtering as a method that approximates evolving densities by
generating streams of particles and assigning weights to them.
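To make the recursion concrete, here is a minimal sketch of one particle-filter update in the bootstrap style, assuming a scalar random-walk state model and a Gaussian likelihood; the model, noise levels, and observations are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_update(particles, weights, y, sigma_x=1.0, sigma_y=0.5):
    """One recursive update: propagate, reweight by the likelihood, resample."""
    # 1. Generate new particles: each particle's "parent" is propagated through
    #    the (assumed) random-walk state transition, used as the importance function.
    particles = particles + sigma_x * rng.standard_normal(particles.size)
    # 2. Update the weights with the likelihood p(y | particle) of the new observation.
    weights = weights * np.exp(-0.5 * ((y - particles) / sigma_y) ** 2)
    weights /= weights.sum()
    # 3. Resample (sampling-importance-resampling): keeps particles in regions of
    #    significant probability mass and discards those with negligible mass.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# Example: process a few observations sequentially.
particles = rng.standard_normal(500)
weights = np.full(500, 1.0 / 500)
for y in [0.3, 0.9, 1.4]:
    particles, weights = pf_update(particles, weights, y)
    print(np.sum(weights * particles))   # filtered state estimate
```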
The roots of particle filtering were established about 50 years ago with the methods
for estimating the mean squared extension of a self-avoiding random walk on lattice
spaces [33, 67]. One of the first applications of the methodology was the simulation
of chain polymers. The control community produced interesting work on sequential
Monte Carlo integration methods in the 1960s and 1970s, for example [1, 34]. The
popularity of particle filtering in the last 15 years was triggered by [31], where
the sampling-importance-resampling filter was presented and applied to tracking
problems. The timing of [31] was perfect because it came during a period when com-
puting power started to become widely available. Ever since, work on particle
filtering has proliferated, and many important advances have been made.
1 In the literature, particle filtering is also known as sequential Monte Carlo methodology [26].
2 The random measure is basically a probability mass function.
 