3.5. Filtering and State Estimation
Suppose we have a state-space model for our time series and some observations y; can we find the state x? This is the problem of filtering or state estimation. Clearly, it is not the same as the problem of finding a model in the first place, but it is closely related, and it is also a problem in statistical inference.
In this context, a filter is a function which provides an estimate x̂_t of x_t on the basis of observations up to and including time t: x̂_t = f(y_0^t). A filter is recursive if it estimates the state at t on the basis of its estimate at t - 1 and the new observation: x̂_t = f(x̂_{t-1}, y_t). Recursive filters are especially suited to online
use, since one does not need to retain the complete sequence of previous obser-
vations, merely the most recent estimate of the state. As with prediction in gen-
eral, filters can be designed to provide either point estimates of the state, or
distributional estimates. Ideally, in the latter case, we would get the conditional distribution, Pr(X_t = x | Y_1^t = y_1^t), and in the former case the conditional expectation, ∫ x Pr(X_t = x | Y_1^t = y_1^t) dx.
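The recursive form x̂_t = f(x̂_{t-1}, y_t) can be made concrete with the simplest possible example (the model and numbers here are invented for illustration, not taken from the text): when the hidden state is a constant observed with noise, the running mean is a recursive point-estimate filter, since each new estimate needs only the previous estimate and the latest observation.

```python
# Minimal recursive point-estimate filter: a running mean.
# Illustrative assumption: the hidden state x is constant, and each
# observation is y_t = x + noise. The update uses only the previous
# estimate and the new observation, so the full observation history
# never needs to be stored -- the defining property of a recursive filter.

def update(prev_estimate, t, y_t):
    """New estimate from the old estimate and the new observation alone."""
    return prev_estimate + (y_t - prev_estimate) / (t + 1)

observations = [4.9, 5.2, 5.1, 4.8, 5.0]  # noisy readings of a constant state
estimate = observations[0]
for t, y in enumerate(observations[1:], start=1):
    estimate = update(estimate, t, y)

print(estimate)  # equals the mean of all observations, here 5.0
```

This trivial case already shows the structure that the Kalman filter generalizes: the new estimate is the old estimate plus a gain times the innovation (y_t minus the prediction), with the gain here fixed at 1/(t + 1).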
Given the frequency with which the problem of state estimation shows up in
different disciplines, and its general importance when it does appear, much
thought has been devoted to it over many years. The problem of optimal linear
filters for stationary processes was solved independently by two of the "grandfa-
thers" of complex systems science, Norbert Wiener and A.N. Kolmogorov, dur-
ing the Second World War (78,79). In the 1960s, Kalman and Bucy (80-82)
solved the problem of optimal recursive filtering, assuming linear dynamics,
linear observations and additive noise. In the resulting Kalman filter , the new
estimate of the state is a weighted combination of the old state, extrapolated
forward, and the state that would be inferred from the new observation alone.
The requirement of linear dynamics can be relaxed slightly with what's called
the "extended Kalman filter," essentially by linearizing the dynamics around the
current estimated state.
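A one-dimensional sketch makes the "weighted combination" explicit. This is a minimal illustrative implementation, with made-up dynamics and noise parameters, not a reconstruction of any particular system from the text:

```python
import numpy as np

# One-dimensional Kalman filter sketch (illustrative parameters).
# Assumed model:
#   dynamics:     x_t = a * x_{t-1} + process noise,  variance q
#   observation:  y_t = x_t + observation noise,      variance r
# Each step blends the old estimate, extrapolated forward, with the
# new observation, weighted by the Kalman gain k.

def kalman_filter(ys, a=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0):
    x, p = x0, p0                        # state estimate and its variance
    estimates = []
    for y in ys:
        # Predict: extrapolate the old estimate forward through the dynamics.
        x_pred = a * x
        p_pred = a * a * p + q
        # Update: weighted combination of the prediction and the observation.
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
ys = 5.0 + rng.normal(0.0, 1.0, size=200)   # noisy readings of a constant state
est = kalman_filter(ys)
print(est[-1])  # converges toward the true state, 5.0
```

The gain k governs the weighting: when the prediction is uncertain (p_pred large relative to r), the new observation dominates; when the prediction is reliable, it is trusted over the noisy measurement.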
Nonlinear solutions go back to the pioneering work of Stratonovich (83) and Kushner (84) in the late 1960s, who gave optimal, recursive solutions. Unlike
the Wiener or Kalman filters, which give point estimates, the Stratonovich-
Kushner approach calculates the complete conditional distribution of the state;
point estimates take the form of the mean or the most probable state (85). In
most circumstances, the strictly optimal filter is hopelessly impractical numeri-
cally. Modern developments, however, have opened up some very important
lines of approach to practical nonlinear filters (86), including approaches that
exploit the geometry of the nonlinear dynamics (87,88), as well as more mun-
dane methods that yield tractable numerical approximations to the optimal filters
(89,90). Noise reduction methods (§3.4) and hidden Markov models (§3.6) can
also be regarded as nonlinear filters.
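One of the tractable numerical approximations alluded to above is the particle filter (sequential Monte Carlo), which approximates the full conditional distribution Pr(X_t | Y_1^t) by a cloud of weighted samples. The following is a sketch of the simplest ("bootstrap") variant, with an invented linear-Gaussian model chosen purely for illustration; the same code structure applies to nonlinear dynamics:

```python
import numpy as np

# Bootstrap particle filter sketch: a Monte Carlo approximation to the
# optimal nonlinear filter. The conditional distribution of the state is
# represented by a set of samples ("particles").
# Assumed model (invented for illustration):
#   x_t = 0.9 * x_{t-1} + process noise (std 0.5)
#   y_t = x_t + observation noise (std 0.5)

def particle_filter(ys, n_particles=2000, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        # Propagate each particle through the (possibly nonlinear) dynamics.
        particles = 0.9 * particles + rng.normal(0.0, 0.5, n_particles)
        # Weight each particle by the likelihood of the new observation.
        weights = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
        weights /= weights.sum()
        # Resample: particles survive in proportion to their weights.
        particles = rng.choice(particles, size=n_particles, p=weights)
        # Point estimate = conditional mean, as discussed above.
        means.append(particles.mean())
    return np.array(means)

# Simulate a trajectory from the same model, then filter its observations.
rng = np.random.default_rng(1)
x, xs, ys = 0.0, [], []
for _ in range(100):
    x = 0.9 * x + rng.normal(0.0, 0.5)
    xs.append(x)
    ys.append(x + rng.normal(0.0, 0.5))

est = particle_filter(ys)
mse = np.mean((est - np.array(xs)) ** 2)
print(mse)  # filtered error is smaller than the raw observation noise variance
```

Because the particles carry the whole conditional distribution, the same run also yields distributional estimates (quantiles, modes) at no extra cost, in the spirit of the Stratonovich-Kushner approach.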