Fig. 2.53. Canonical form of the knowledge-based model discretized by the explicit
Euler method
Levenberg-Marquardt or BFGS algorithm), using for instance a semidirected
algorithm under the output noise assumption. For that training, it would be
reasonable to initialize the weight w to 8.32. Note that, in that very simple
case, step 2 of the algorithm is bypassed.
Figure 2.54 shows the modeling error with that improved model. The mean
square modeling error on the test sequence is 0.08 (instead of 0.17 for the
knowledge-based model); since the noise variance is 0.01, further improvement
may be expected from a more elaborate model.
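The diagnostic used here can be made explicit: as long as the mean square error on the test sequence remains well above the noise variance, the model is still missing structure. A small helper, using the figures quoted in the text:

```python
import numpy as np

def mse(y_true, y_pred):
    # mean square modeling error on a test sequence
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

print(mse([1.0, 2.0], [1.1, 1.9]))   # ≈ 0.01

# figures quoted in the text
noise_variance = 0.01
mse_knowledge_based = 0.17
mse_improved = 0.08

# the improved model halves the error, but it is still an order of
# magnitude above the noise floor, so a more elaborate model is worth trying
print(mse_improved > noise_variance)
```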
One therefore turns to the second level of criticism of the knowledge-based
model, namely that the right-hand side of the state equation might be a
nonlinear function of x1. To that end, neuron 2 is replaced by a feedforward
neural network whose input is x1, shown on Fig. 2.55 with three hidden
neurons (hence the 6 parameters shown on the figure, plus 4 parameters
related to the biases, not shown).
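The parameter count can be checked directly: with one input, three hidden neurons and a linear output, the network has 3 input weights and 3 output weights (the 6 shown on the figure), plus 3 hidden biases and 1 output bias (the 4 not shown). A one-line check:

```python
def mlp_param_count(n_in, n_hidden, n_out=1):
    # weights between layers, and one bias per hidden/output neuron
    weights = n_in * n_hidden + n_hidden * n_out
    biases = n_hidden + n_out
    return weights, biases

print(mlp_param_count(1, 3))   # (6, 4), as in the text
```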
The feedforward neural network made of the non-numbered neurons shown
on Fig. 2.55 can be trained from data generated by the knowledge-based
model (step 2 of the design procedure): although those values are known to be
inaccurate, the weights resulting from that training are reasonable estimates,
which are subsequently used for initializing the training of the neural network
from experimental data (step 3 of the design procedure). Figure 2.56 shows
the modeling error of that model, with two hidden neurons in the black-box
part of the model (additional neurons generate overfitting). The mean square
modeling error on the test sequence is 0.02, which is a sizeable improvement
over the previous model.
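The two-step initialization described above (pretrain on data generated by the knowledge-based model, then retrain from experimental data) can be sketched as follows. The tiny tanh network, the data, and the plain gradient-descent loop with numerical gradients are all illustrative assumptions, not the book's implementation; only the step-2/step-3 structure follows the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def net(p, x):
    # 1-2-1 tanh network: p = [input weights (2), hidden biases (2),
    # output weights (2), output bias (1)]
    h = np.tanh(np.outer(x, p[0:2]) + p[2:4])
    return h @ p[4:6] + p[6]

def train(p, x, y, lr=0.05, epochs=2000):
    # plain gradient descent with forward-difference gradients (for brevity)
    for _ in range(epochs):
        base = np.mean((net(p, x) - y) ** 2)
        g = np.zeros_like(p)
        for i in range(len(p)):
            q = p.copy()
            q[i] += 1e-6
            g[i] = (np.mean((net(q, x) - y) ** 2) - base) / 1e-6
        p = p - lr * g
    return p

x = np.linspace(-1.0, 1.0, 100)
y_kb = x ** 3                   # data from the knowledge-based model (assumed form)
y_exp = x ** 3 + 0.3 * x + rng.normal(0.0, 0.05, x.size)   # synthetic "experimental" data

p0 = rng.normal(0.0, 0.5, 7)
p_kb = train(p0, x, y_kb)       # step 2: pretrain on knowledge-based data
p_fin = train(p_kb, x, y_exp)   # step 3: retrain from that initialization
print(np.mean((net(p_fin, x) - y_exp) ** 2))
```

Pretraining on the (inaccurate) knowledge-based data yields reasonable starting weights, so the final training from experimental data begins near a sensible region of parameter space rather than from a random initialization.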