of the Gaussian distribution. This means that to describe our current belief state, x_t, we only need to compute and store the mean vector and covariance matrix. Rewriting the posterior in Eq. 10.9, we derive the functional forms of the distributions to show how we end up with a normally distributed posterior.
\[
p(x_t \mid z_{0:t}) \;\propto\; p(z_t \mid x_t)\, p(x_t \mid z_{0:t-1})
\]
From our model assumptions (Eq. 10.4), we have that p(z_t | x_t) is normally distributed. The second term is the predicted belief state given all observations up to the previous time step. This distribution can be rewritten in terms of the model dynamics and a recursion term, as shown in Eq. 10.8. We denote the posterior parameters of x from the previous time step as μ_{t−1} and Σ_{t−1}.
\[
\begin{aligned}
p(x_t \mid z_{0:t-1}) &= \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{0:t-1})\, dx_{t-1} \\
&= \int \mathcal{N}(x_t;\, A x_{t-1},\, Q)\, \mathcal{N}(x_{t-1};\, \mu_{t-1},\, \Sigma_{t-1})\, dx_{t-1} \\
&= \mathcal{N}(x_t;\, A \mu_{t-1},\, A \Sigma_{t-1} A^{T} + Q)
\end{aligned}
\tag{10.9}
\]
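As a quick numerical sanity check (not part of the original text), the marginalization in Eq. 10.9 can be verified by Monte Carlo: draw samples of x_{t−1} from its posterior, push each sample through the dynamics with added process noise, and compare the empirical mean and covariance of the resulting x_t samples against A μ_{t−1} and A Σ_{t−1} A^T + Q. The particular values of A, Q, μ_{t−1}, and Σ_{t−1} below are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Made-up model and posterior parameters for a 2-D state (illustration only).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # dynamics matrix
Q = 0.1 * np.eye(2)                  # process-noise covariance
mu_prev = np.array([2.0, -1.0])      # posterior mean at t-1
Sigma_prev = np.array([[0.5, 0.1],
                       [0.1, 0.3]])  # posterior covariance at t-1

# Monte Carlo version of the integral in Eq. 10.9:
# x_{t-1} ~ N(mu_prev, Sigma_prev), then x_t = A x_{t-1} + w with w ~ N(0, Q).
n = 200_000
x_prev = rng.multivariate_normal(mu_prev, Sigma_prev, size=n)
w = rng.multivariate_normal(np.zeros(2), Q, size=n)
x_t = x_prev @ A.T + w

print("sample mean     :", x_t.mean(axis=0))
print("closed-form mean:", A @ mu_prev)
print("sample cov      :\n", np.cov(x_t, rowvar=False))
print("closed-form cov :\n", A @ Sigma_prev @ A.T + Q)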
To predict the state at time t, we simply apply the model dynamics to the estimate of the state at t − 1, integrating over all possibilities. The integration over the previous state is necessary since we are uncertain of the true value of x at any given time and thus must consider all possibilities. We use the second term, the posterior of x from the previous time step, to weight each guess of the previous state according to our posterior distribution for x_{t−1}. We denote the predicted parameters for x_t as shown in Eqs. 10.10 and 10.11.
\[
m_t = A \mu_{t-1} \tag{10.10}
\]
\[
P_t = A \Sigma_{t-1} A^{T} + Q \tag{10.11}
\]
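In code, the prediction step of Eqs. 10.10 and 10.11 amounts to two lines of linear algebra. The sketch below assumes NumPy arrays for all quantities; the function name kf_predict is my own choice for illustration, not from the text.

import numpy as np

def kf_predict(mu_prev, Sigma_prev, A, Q):
    """Predicted belief parameters (Eqs. 10.10 and 10.11):
    m_t = A mu_{t-1},   P_t = A Sigma_{t-1} A^T + Q."""
    m_t = A @ mu_prev
    P_t = A @ Sigma_prev @ A.T + Q
    return m_t, P_t

The returned pair (m_t, P_t) parameterizes the predicted Gaussian belief N(x_t; m_t, P_t) and is reused in the joint distribution constructed next.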
Combining the observation and prior distributions of the belief state (Eqs. 10.4 and 10.8), we can reconstruct the joint distribution over x_t and z_t.
\[
p(x_t, z_t \mid z_{0:t-1}) = \mathcal{N}\!\left(
\begin{bmatrix} x_t \\ z_t \end{bmatrix};\,
\begin{bmatrix} m_t \\ H m_t \end{bmatrix},\,
\begin{bmatrix} P_t & P_t H^{T} \\ H P_t & H P_t H^{T} + R \end{bmatrix}
\right)
\]
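The block mean and block covariance above can be assembled directly; the sketch below does so with NumPy, using made-up values for H, R, m_t, and P_t, and a helper name, joint_predictive, that is mine rather than the text's. In the usual Kalman-filter derivation, conditioning this joint Gaussian on an observed z_t is what produces the measurement update.

import numpy as np

def joint_predictive(m_t, P_t, H, R):
    """Mean and covariance of p(x_t, z_t | z_{0:t-1}) in the block form above."""
    mean = np.concatenate([m_t, H @ m_t])
    cov = np.block([[P_t,     P_t @ H.T],
                    [H @ P_t, H @ P_t @ H.T + R]])
    return mean, cov

# Made-up example: 2-D state, observe only the first state component.
H = np.array([[1.0, 0.0]])
R = np.array([[0.2]])
m_t = np.array([1.0, 0.5])
P_t = np.array([[0.6, 0.1],
                [0.1, 0.4]])

mean, cov = joint_predictive(m_t, P_t, H, R)
print(mean)  # [m_t, H m_t], length 3
print(cov)   # 3x3 block covariance [[P_t, P_t H^T], [H P_t, H P_t H^T + R]]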