observable, which is dependent on the hidden weight (state) vector $\mathbf{w}_n$. From the predictor-corrector property of the Kalman filter and its variants and extensions, examination of the block diagram of Figure 6.5 leads us to the following insightful statement: the multilayer perceptron, undergoing training, performs the role of the predictor, and the EKF, providing the supervision, performs the role of the corrector. Thus, whereas in traditional applications of the Kalman filter for sequential state estimation the roles of predictor and corrector are embodied in the Kalman filter itself, in supervised training applications these two roles are split between the MLP and the EKF. Such a split of responsibilities in supervised learning is in perfect accord with the way in which the input and the desired response of the training sample are split in Figure 6.5.
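To make this split of roles concrete, the following Python sketch outlines one training step under the scheme of Figure 6.5. The helper names mlp_forward and ekf_correct are hypothetical placeholders for the MLP forward pass and the EKF measurement update of Section 6.4.1; they are not functions defined in the text.

```python
def train_step(w_pred, P_pred, u_n, d_n, mlp_forward, ekf_correct):
    """One supervised training step: the MLP predicts, the EKF corrects.

    w_pred, P_pred : predicted weight estimate and its error covariance
    u_n, d_n       : input and desired response of the training sample
    """
    # Predictor role: the MLP maps the input through the current weight
    # estimate to produce the predicted observable b(w_{n|n-1}, u_n).
    y_pred = mlp_forward(w_pred, u_n)

    # Corrector role: the EKF turns the innovation (desired response minus
    # prediction) into an update of the weights and their covariance.
    innovation = d_n - y_pred
    w_corr, P_corr = ekf_correct(w_pred, P_pred, u_n, innovation)
    return w_corr, P_corr
```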
6.4.1 The EKF Algorithm
For us to be able to apply the EKF algorithm as the facilitator of the supervised learning task, we have to linearize the measurement equation (6.5) by retaining first-order terms in the Taylor series expansion of the nonlinear part of the equation. With $\mathbf{b}(\mathbf{w}_n, \mathbf{u}_n)$ as the only source of nonlinearity, we may approximate (6.5) as
$$
\mathbf{d}_n = \mathbf{B}_n \mathbf{w}_n + \mathbf{v}_n \tag{6.9}
$$
where $\mathbf{B}_n$ is the $p$-by-$s$ measurement matrix of the linearized model. The linearization process involves computing the partial derivatives of the $p$ outputs of the MLP with respect to its $s$ weights, obtaining the required matrix
$$
\mathbf{B} = \begin{pmatrix}
\dfrac{\partial b_1}{\partial w_1} & \dfrac{\partial b_1}{\partial w_2} & \cdots & \dfrac{\partial b_1}{\partial w_s} \\[1ex]
\dfrac{\partial b_2}{\partial w_1} & \dfrac{\partial b_2}{\partial w_2} & \cdots & \dfrac{\partial b_2}{\partial w_s} \\[1ex]
\vdots & \vdots & & \vdots \\[1ex]
\dfrac{\partial b_p}{\partial w_1} & \dfrac{\partial b_p}{\partial w_2} & \cdots & \dfrac{\partial b_p}{\partial w_s}
\end{pmatrix} \tag{6.10}
$$
where $b_i$, $i = 1, 2, \ldots, p$, in (6.10) denotes the $i$th element of the vectorial function $\mathbf{b}(\cdot, \cdot)$, and the partial derivatives on the right-hand side of (6.10) are evaluated at $\mathbf{w}_n = \mathbf{w}_{n|n-1}$. Recognizing that the dimensionality of the weight vector $\mathbf{w}$ is $s$, it follows that the matrix product $\mathbf{B}\mathbf{w}$ is a $p$-by-1 vector, which is in agreement with the dimensionality of the observable $\mathbf{d}$.
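As a numerical illustration of (6.9)-(6.10), the sketch below builds the matrix $\mathbf{B}_n$ for a small one-hidden-layer MLP by finite differences and then applies a standard EKF measurement update. The network sizes, noise covariance, and helper names (mlp_output, jacobian_B) are assumptions introduced here for illustration; they are not taken from the text.

```python
import numpy as np

def mlp_output(w, u, n_in=2, n_hid=3, n_out=2):
    """b(w, u): unpack the flat weight vector w and run a forward pass
    through a tanh hidden layer and a linear output layer."""
    k = n_hid * n_in
    W1 = w[:k].reshape(n_hid, n_in)
    W2 = w[k:].reshape(n_out, n_hid)
    return W2 @ np.tanh(W1 @ u)               # p-dimensional output, p = n_out

def jacobian_B(w, u, eps=1e-6):
    """B_n of (6.10): p-by-s matrix of partial derivatives of b(w, u)
    with respect to the s weights, by forward finite differences."""
    b0 = mlp_output(w, u)
    B = np.zeros((b0.size, w.size))
    for j in range(w.size):                    # one column per weight
        dw = np.zeros_like(w)
        dw[j] = eps
        B[:, j] = (mlp_output(w + dw, u) - b0) / eps
    return B

rng = np.random.default_rng(0)
s = 3 * 2 + 2 * 3                              # total number of weights
w_pred = rng.standard_normal(s)                # predicted weights w_{n|n-1}
P_pred = np.eye(s)                             # weight error covariance
R = 0.01 * np.eye(2)                           # measurement-noise covariance of v_n
u_n = rng.standard_normal(2)                   # input of the training sample
d_n = np.array([1.0, 0.0])                     # desired response

B = jacobian_B(w_pred, u_n)                    # evaluated at w_{n|n-1}, as in (6.10)
S = B @ P_pred @ B.T + R                       # innovation covariance
K = P_pred @ B.T @ np.linalg.inv(S)            # Kalman gain
w_corr = w_pred + K @ (d_n - mlp_output(w_pred, u_n))   # corrected weights
P_corr = P_pred - K @ B @ P_pred               # corrected covariance
print(B.shape)                                 # (2, 12): p-by-s, so B w is p-by-1
```

In practice the partial derivatives in (6.10) are typically obtained by backpropagating through the MLP rather than by finite differences; the finite-difference form is used here only to keep the sketch short.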
6.5 EXPERIMENTAL COMPARISON OF THE EXTENDED KALMAN
FILTERING ALGORITHM WITH THE BACK-PROPAGATION AND
SUPPORT VECTOR MACHINE LEARNING ALGORITHMS
In this section, we consider a binary classification problem as shown in Figure 6.6(a). It consists of three concentric circles of radii 0.3, 0.8, and 1. It is a