The second and the third steps are skipped because u only appears in the measure-
ment equation. The computation of the weights proceeds by
\[
w^{(m)}(n) \propto w^{(m)}(n-1)\,\mathcal{N}\!\left(y(n);\ \hat{y}^{(m)}(n),\ S_y^{(m)}(n)\right)
\]
where
\[
\hat{y}^{(m)}(n) = g_2\!\left(x^{(m)}(n)\right) + u^{(m)}(n-1), \qquad
S_y^{(m)}(n) = C_u^{(m)}(n-1) + C_{v_2}.
\]
In the last step we update the estimates of the biases and their covariances by
\[
\begin{aligned}
u^{(m)}(n) &= u^{(m)}(n-1) + K^{(m)}(n)\left(y(n) - g_2\!\left(x^{(m)}(n)\right) - u^{(m)}(n-1)\right),\\
K^{(m)}(n) &= C_u^{(m)}(n-1)\left(C_u^{(m)}(n-1) + C_{v_2}\right)^{-1},\\
C_u^{(m)}(n) &= \left(I - K^{(m)}(n)\right) C_u^{(m)}(n-1).
\end{aligned}
\]
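To make these two steps concrete, the following is a minimal Python sketch (an illustration, not code from the text) of the weight computation and the per-particle Kalman-style bias update; the names g2, C_v2, and the particle arrays are placeholders for quantities defined by the surrounding algorithm.

```python
import numpy as np

def gaussian_pdf(y, mean, cov):
    """Evaluate the multivariate Gaussian density N(y; mean, cov)."""
    d = y.size
    diff = y - mean
    exponent = -0.5 * diff @ np.linalg.solve(cov, diff)
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(exponent) / norm

def weight_and_bias_update(y, x, u, C_u, w_prev, g2, C_v2):
    """One time step of the weight update and the per-particle bias update.

    Per particle m, x[m] is the state particle x^(m)(n), u[m] the bias mean
    u^(m)(n-1), C_u[m] its covariance C_u^(m)(n-1), and w_prev[m] the weight
    w^(m)(n-1).  g2 is the measurement function and C_v2 the covariance of
    the measurement noise v2.
    """
    M = len(w_prev)
    d_u = u[0].size
    w = np.empty(M)
    for m in range(M):
        # Weight update: w^(m)(n) ∝ w^(m)(n-1) N(y(n); y_hat^(m)(n), S_y^(m)(n))
        y_hat = g2(x[m]) + u[m]                  # predicted measurement
        S_y = C_u[m] + C_v2                      # innovation covariance
        w[m] = w_prev[m] * gaussian_pdf(y, y_hat, S_y)

        # Bias update: gain, mean, and covariance, as in the equations above
        K = C_u[m] @ np.linalg.inv(S_y)          # K^(m)(n)
        u[m] = u[m] + K @ (y - g2(x[m]) - u[m])  # u^(m)(n)
        C_u[m] = (np.eye(d_u) - K) @ C_u[m]      # C_u^(m)(n)

    return w / w.sum(), u, C_u                   # normalized weights, updated biases
```

Note that the innovation covariance S_y is formed once per particle and shared between the weight evaluation and the Kalman gain, mirroring how the two steps reuse the same quantity in the equations above.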
5.8 PREDICTION
Recall that the prediction problem revolves around the estimation of the predictive density $f(x(n+k) \mid y(1{:}n))$, where $k > 0$. The prediction of the state is important in many applications. One of them is model selection, where from a set of models $M_l$, $l = 1, 2, \ldots, L$, one has to choose the best model according to a given criterion, for example one based on MAP [9]. Then one has to work with predictive probability distributions of the form $f(y(n+1) \mid y(1{:}n), M_l)$, where
\[
f(y(n+1) \mid y(1{:}n), M_l) = \int f(y(n+1) \mid x(n+1), M_l)\, f(x(n+1) \mid y(1{:}n), M_l)\, dx(n+1)
\]
where the second factor of the integrand is the predictive density of the state. This integral often cannot be solved analytically, which is why the prediction problem is of interest.
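As an illustrative aside (our own sketch, not a statement from the text): if $\{x^{(m)}(n), w^{(m)}(n)\}_{m=1}^{M}$ is a particle approximation of $f(x(n) \mid y(1{:}n), M_l)$ and $\tilde{x}^{(m)}(n+1)$ denotes a draw from $f(x(n+1) \mid x^{(m)}(n), M_l)$, then the integral can be approximated by
\[
f(y(n+1) \mid y(1{:}n), M_l) \approx \sum_{m=1}^{M} w^{(m)}(n)\, f\!\left(y(n+1) \mid \tilde{x}^{(m)}(n+1), M_l\right).
\]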
First we address the case $k = 1$, that is, the approximation of the predictive density $f(x(n+1) \mid y(1{:}n))$. Theoretically, we can obtain it from the filtering PDF $f(x(n) \mid y(1{:}n))$ by using the following integral
\[
f(x(n+1) \mid y(1{:}n)) = \int f(x(n+1) \mid x(n))\, f(x(n) \mid y(1{:}n))\, dx(n). \tag{5.33}
\]
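As a rough sketch (an assumption on our part, not the book's algorithm), (5.33) can be approximated with particles by propagating each filtering particle through the state-transition density; `propagate_state` below is a hypothetical function that draws one sample from f(x(n+1) | x(n)).

```python
import numpy as np

def predict_one_step(x, w, propagate_state, rng=None):
    """Particle approximation of f(x(n+1) | y(1:n)) in the spirit of (5.33).

    x, w: particles x^(m)(n) and weights w^(m)(n) approximating f(x(n) | y(1:n)).
    propagate_state: hypothetical function drawing one sample from f(x(n+1) | x(n)).
    """
    rng = rng or np.random.default_rng()
    x_pred = np.array([propagate_state(x_m, rng) for x_m in x])  # x^(m)(n+1)
    return x_pred, w  # the weights are unchanged by the propagation step
```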