The measurement noise is modelled by a zero-mean Gaussian N(0, (m(x_n)τ_n)^{-1}) with precision m(x_n)τ_n. Here, we utilise the matching function to blur observations that are not matched. Given, for example, that x_n is matched and so m(x_n) = 1, the resulting measurement noise has variance τ_n^{-1}. However, if that state is not matched, that is if m(x_n) = 0, then the measurement noise has infinite variance and the associated observation does not contain any information. The system state ω is modelled by the multivariate Gaussian ω ∼ N(w, Λ^{-1}), centred on w and with precision matrix Λ. Hence, the output υ_n is also Gaussian, υ_n ∼ N(y_n, (m(x_n)τ_n)^{-1}), and jointly Gaussian with the system state ω. More details on the random variables, their relations and distributions can be found in [164, Chap. 5] and [2, Chap. 1].
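To make the effect of the matching function concrete, the following minimal sketch (hypothetical names, not code from the text) computes the effective measurement noise variance (m(x_n)τ_n)^{-1} for a matched and an unmatched input, treating a zero precision as infinite variance.

```python
import numpy as np

def noise_variance(m_x: float, tau: float) -> float:
    """Effective measurement noise variance (m(x) * tau)^-1.

    A matched input (m(x) = 1) gives variance 1/tau; an unmatched
    input (m(x) = 0) gives infinite variance, i.e. an observation
    that carries no information about the system state.
    """
    precision = m_x * tau
    return np.inf if precision == 0.0 else 1.0 / precision

tau = 4.0                               # assumed noise precision for illustration
print(noise_variance(1.0, tau))         # matched:   0.25
print(noise_variance(0.0, tau))         # unmatched: inf
```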
Comparing the model (5.48) to the previously introduced linear model (5.1), it
can be seen that the system state corresponds to the weight vector, and that the
only difference is the assumption that the measurement noise variance can change
with each observation. Additionally, the Kalman-Bucy system model explicitly
assumes a multivariate Gaussian model for the system state ω , resulting in the
output υ also being modelled by a Gaussian.
The aim of the Kalman filter is to estimate the system state that can sub-
sequently be used to predict the output given a new input. This is achieved by
conditioning a prior ω_0 ∼ N(w_0, Λ_0^{-1}) on the available observations. As before, we proceed by assuming that the current model ω_N ∼ N(w_N, Λ_N^{-1}) results from incorporating the information of N observations, and we want to add the new observation (x_{N+1}, y_{N+1}, τ_{N+1}). Later it will be shown how to estimate the noise precision τ_{N+1}, but for now we assume that it is part of the observation.
Covariance Form
As the system state and the observation are jointly Gaussian, the Bayesian
update of the model parameters is given by [2, Chap. 3]
\[
\begin{aligned}
w_{N+1} &= \mathrm{E}\!\left(\omega_N \,\middle|\, \upsilon_{N+1} \sim \mathcal{N}\!\left(y_{N+1}, (m(x_{N+1})\tau_{N+1})^{-1}\right)\right) \\
&= \mathrm{E}(\omega_N) + \mathrm{cov}(\omega_N, \upsilon_{N+1})\,\mathrm{var}(\upsilon_{N+1})^{-1}\left(y_{N+1} - \mathrm{E}(\upsilon_{N+1})\right),
\end{aligned}
\tag{5.49}
\]
\[
\begin{aligned}
\Lambda_{N+1}^{-1} &= \mathrm{cov}\!\left(\omega_N, \omega_N \,\middle|\, \upsilon_{N+1} \sim \mathcal{N}\!\left(y_{N+1}, (m(x_{N+1})\tau_{N+1})^{-1}\right)\right) \\
&= \mathrm{cov}(\omega_N, \omega_N) - \mathrm{cov}(\omega_N, \upsilon_{N+1})\,\mathrm{var}(\upsilon_{N+1})^{-1}\,\mathrm{cov}(\upsilon_{N+1}, \omega_N).
\end{aligned}
\tag{5.50}
\]
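As a rough illustration of how (5.49) and (5.50) can be applied, the sketch below (hypothetical function and argument names, not from the text) conditions a Gaussian state estimate on a single jointly Gaussian scalar observation, taking the required moments as inputs; the concrete moments for the model at hand are evaluated next in the text.

```python
import numpy as np

def condition_on_observation(mean_prior, cov_prior, cross_cov, var_obs, mean_obs, y):
    """Bayesian update of a Gaussian state given one jointly Gaussian
    scalar observation, following (5.49) and (5.50).

    mean_prior : E(omega_N)                  -- prior state mean
    cov_prior  : cov(omega_N, omega_N)       -- prior state covariance
    cross_cov  : cov(omega_N, upsilon_{N+1}) -- state/observation covariance
    var_obs    : var(upsilon_{N+1})          -- observation variance
    mean_obs   : E(upsilon_{N+1})            -- predicted observation
    y          : y_{N+1}                     -- observed output
    """
    cross_cov = np.asarray(cross_cov)
    gain = cross_cov / var_obs                          # cov(omega, upsilon) * var(upsilon)^-1
    mean_post = mean_prior + gain * (y - mean_obs)      # (5.49)
    cov_post = cov_prior - np.outer(gain, cross_cov)    # (5.50)
    return mean_post, cov_post
```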
Evaluating the expectations, variances and covariances
\[
\begin{aligned}
\mathrm{E}(\omega_N) &= w_N, &
\mathrm{E}(\upsilon_{N+1}) &= w_N^\top x_{N+1}, \\
\mathrm{cov}(\omega_N, \omega_N) &= \Lambda_N^{-1}, &
\mathrm{var}(\upsilon_{N+1}) &= x_{N+1}^\top \Lambda_N^{-1} x_{N+1} + (m(x_{N+1})\tau_{N+1})^{-1}, \\
\mathrm{cov}(\omega_N, \upsilon_{N+1}) &= \Lambda_N^{-1} x_{N+1}, &
\mathrm{cov}(\upsilon_{N+1}, \omega_N) &= x_{N+1}^\top \Lambda_N^{-1},
\end{aligned}
\]
and substituting them into the Bayesian update results in
\[
\begin{aligned}
w_{N+1} &= w_N + \Lambda_N^{-1} x_{N+1}\left(x_{N+1}^\top \Lambda_N^{-1} x_{N+1} + (m(x_{N+1})\tau_{N+1})^{-1}\right)^{-1}\left(y_{N+1} - w_N^\top x_{N+1}\right), \\
\Lambda_{N+1}^{-1} &= \Lambda_N^{-1} - \Lambda_N^{-1} x_{N+1}\left(x_{N+1}^\top \Lambda_N^{-1} x_{N+1} + (m(x_{N+1})\tau_{N+1})^{-1}\right)^{-1} x_{N+1}^\top \Lambda_N^{-1}.
\end{aligned}
\]
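Combining the moments above with the update (5.49) and (5.50) gives a complete covariance-form filter step. The sketch below is a minimal interpretation (hypothetical names; the toy matching function and noise precision are assumptions chosen for illustration) that performs one such step and runs a small sequential loop starting from a broad prior (w_0, Λ_0^{-1}).

```python
import numpy as np

def kalman_update(w, cov, x, y, tau, m_x):
    """One covariance-form update of the state estimate (w, cov),
    where cov = Lambda^-1, for observation (x, y, tau) with match m(x)."""
    noise_var = np.inf if m_x * tau == 0.0 else 1.0 / (m_x * tau)
    cross_cov = cov @ x                          # cov(omega_N, upsilon_{N+1})
    obs_var = x @ cov @ x + noise_var            # var(upsilon_{N+1})
    if not np.isfinite(obs_var):                 # unmatched: observation is uninformative
        return w, cov
    gain = cross_cov / obs_var                   # cov(omega, upsilon) * var(upsilon)^-1
    w_new = w + gain * (y - w @ x)               # substituted (5.49)
    cov_new = cov - np.outer(gain, cross_cov)    # substituted (5.50)
    return w_new, cov_new

# Usage: start from a prior (w_0, Lambda_0^-1) and add observations one by one.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0])
w, cov = np.zeros(2), np.eye(2) * 10.0           # broad prior
for _ in range(50):
    x = np.array([1.0, rng.uniform(-1.0, 1.0)])  # input with bias term
    m_x = 1.0 if x[1] > 0.0 else 0.0             # toy matching function (assumption)
    tau = 25.0                                   # assumed noise precision
    y = w_true @ x + rng.normal(scale=tau ** -0.5)
    w, cov = kalman_update(w, cov, x, y, tau, m_x)
print(w)   # approaches w_true, using only the matched observations
```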
 