The learning-rate parameter of the back-propagation algorithm was fixed at 0.01.
For the MLPs trained by the BP and the EKF, the initial weights were set up as
described in Section 6.4. For the EKF, two more covariance matrices were additionally
required.
1. The covariance matrix of the dynamic noise, Q_n, was annealed such that Q_n =
(1/λ − 1) P_{n|n}, where P_{n|n} is the error covariance associated with the weight
estimate at time instant n, and λ ∈ (0, 1) is the forgetting factor as defined in
the recursive least-squares algorithm; this approximately assigns exponentially
decaying weights to past observations. λ was fixed at 0.9995.
2. The variance of the measurement noise, R_n, was fixed at unity.
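The annealing schedule above can be sketched as a small helper. This is an illustrative NumPy sketch, not the book's code; the function name and the example covariance values are ours.

```python
import numpy as np

def anneal_process_noise(P_nn, lam=0.9995):
    """Annealed dynamic-noise covariance Q_n = (1/lam - 1) * P_{n|n}.

    P_nn : error covariance of the weight estimate at time instant n
    lam  : forgetting factor in (0, 1), as in recursive least squares
    """
    return (1.0 / lam - 1.0) * P_nn

# Example: a 2x2 error covariance (illustrative values only)
P = np.eye(2) * 0.5
Q = anneal_process_noise(P)
```

With λ close to 1, the factor 1/λ − 1 is tiny, so Q_n injects only a small amount of process noise proportional to the current estimation uncertainty.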
For the SVM, the kernel was chosen to be the Gaussian radial-basis function. A
kernel width of 0.2 was found to be a good choice for this problem; this value was
chosen based on the accuracy of the test classification results. A soft margin of
unity was found to be appropriate for this problem. The quadratic programming
code, available as a built-in routine in the MATLAB Optimization Toolbox, was
used for training the SVM.
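The Gaussian RBF kernel and the resulting SVM decision function can be sketched as follows. This is a hedged NumPy illustration of the kernel described above (width 0.2), not the MATLAB routine the text refers to; the function names, and the assumption that the support-vector coefficients and bias are already available from the QP solver, are ours.

```python
import numpy as np

def gaussian_kernel(X1, X2, width=0.2):
    """Gaussian RBF kernel K(x, x') = exp(-||x - x'||^2 / (2 * width^2))."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-sq / (2.0 * width**2))

def svm_decision(X_test, X_sv, y_sv, alphas, b, width=0.2):
    """Decision values f(x) = sum_i alpha_i * y_i * K(x_i, x) + b,
    given support vectors X_sv with labels y_sv and QP-derived alphas."""
    K = gaussian_kernel(X_sv, X_test, width)
    return (alphas * y_sv) @ K + b

# Tiny illustrative kernel matrix
X = np.array([[0.0, 0.0], [0.1, 0.0]])
K = gaussian_kernel(X, X)
```

The sign of `svm_decision` gives the predicted class; the alphas themselves come from solving the dual quadratic program, as done here with MATLAB's optimization toolbox.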
For the purpose of training, 1000 data points were randomly picked from the
region under consideration. In the MLP-based classification, each training epoch
contained 100 examples randomly drawn from the training data set. For the purpose
of testing, the test data were prepared as follows: a grid of 100 × 100 data points
was chosen from the square region [−1, 1] × [−1, 1] [see Fig. 6.6(a)], and the grid
points falling outside the unit circle were then discarded. In so doing, we obtained
7825 test points.
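The test-grid construction described above can be sketched in a few lines of NumPy. The exact number of surviving points depends on the grid convention (e.g. whether endpoints are included), so this sketch yields a count close to, but not necessarily exactly, the 7825 points reported; the variable names are ours.

```python
import numpy as np

# 100 x 100 grid of points over the square [-1, 1] x [-1, 1]
xs = np.linspace(-1.0, 1.0, 100)
gx, gy = np.meshgrid(xs, xs)
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Discard grid points falling outside the unit circle
inside = np.sum(grid**2, axis=1) <= 1.0
test_points = grid[inside]
```

Roughly a fraction π/4 of the 10,000 grid points survive, which is consistent with the count reported in the text.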
At the end of every interval of 10 training epochs, the MLP was presented with the
test data set. We performed 50 such independent trials and computed the ensemble-
averaged correct classification rate. In contrast, for the SVM, this test grid was
presented at the end of the training session. Figures 6.7(a) and 6.7(b) show the results.
Figure 6.7 Ensemble-averaged (over 50 runs) correct classification rate versus number
of epochs.
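The ensemble averaging used for Figure 6.7 amounts to a per-checkpoint mean over independent trials. A minimal sketch, assuming the per-trial correct-classification rates have already been collected into a 2-D array (the toy numbers below are illustrative only):

```python
import numpy as np

def ensemble_average(rates):
    """Mean correct-classification rate at each checkpoint.

    rates : array-like of shape (n_trials, n_checkpoints); one row per
    independent trial (50 trials in the experiment), with columns taken
    every 10 training epochs. Returns the per-checkpoint ensemble mean.
    """
    rates = np.asarray(rates, dtype=float)
    return rates.mean(axis=0)

# e.g. two toy trials, each evaluated at three checkpoints
avg = ensemble_average([[0.90, 0.94, 0.96],
                        [0.92, 0.96, 0.98]])
```

Plotting the resulting averages against the epoch count reproduces the kind of learning curve shown in Figure 6.7.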