A feedforward neural network implements the function φ_RN. The input of the network is made of the signal values from time k down to time k − p + 1 (output of the process of interest) and of the control values from time k down to time k − r + 1 (input of the process of interest). Here, p is the order of the model with respect to the state and r is the order of the model with respect to the control.
The estimation is based on the minimization of the modeling error, i.e., the difference between the output of the process, x(k + 1), and the prediction g(k + 1) produced by the model. This follows the parameter estimation strategy presented in Chap. 2 (see dynamical modeling with the state-noise assumption and input-output representation).
The training set is made of input vectors of the type x_k = [x(k); ... ; x(k − p + 1); u(k); ... ; u(k − r + 1)] and of associated output scalars of the type g_k = x(k + 1). Two strategies can be used for building the training set:
• If a simulator of the process is available, it can be used to build the training set. In that case, one is free to choose a representative sampling of the network inputs: either a regular sampling of the input space, or input samples drawn according to a probability law that favors the usual operating region of the input space. Sometimes, on the contrary, the limit operating points and the boundary of the safety domain are favored, in order to ensure the safety and the accuracy of the representation over the entire operating domain. That situation, where a simulator is available, is common when one is looking for a semi-physical representation or “grey-box” model (see Chap. 2).
• By contrast, if training is performed from actual experimental data, the sampling of the input space cannot be chosen at will: the training set is obtained by sampling the experimental input-output trajectory. In that case, it is important to use the experimental device with a correct initialization and for a sufficiently long time, so that the input space of the network (which is essentially the product of the state space and the control space, up to the orders of the NARX model) is visited with sufficient frequency. To identify a controlled dynamical system, one generally excites it with a randomly generated open-loop control signal. The selection of an appropriate control trajectory is tricky. For linear systems, harmonic excitations are sufficient to identify the system via its transfer function. For nonlinear systems, one has to combine a random generator with physical knowledge of the system; sliding-frequency (chirp) control signals or filtered noise control signals may be used, as in the sketch following this list. Chapter 2 provides some elements that are useful for experiment design.
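The following Python sketch (not from the original text) illustrates the two strategies above. The function names, the placeholder one-step simulator and the sampling bounds are illustrative assumptions; only the structure of the NARX input vector x_k and of the target g_k = x(k + 1) follows the definitions given earlier.

import numpy as np

# --- Strategy 1: a simulator is available ---------------------------------
def one_step_simulator(x_lags, u_lags):
    # Placeholder dynamics, for illustration only; any one-step model with
    # this signature (lagged outputs, lagged controls) -> x(k+1) would do.
    return 0.8 * x_lags[0] - 0.2 * x_lags[-1] + 0.5 * np.tanh(u_lags[0])

def dataset_from_simulator(n_samples, p, r, rng):
    # Choose the network inputs freely (here: uniformly in [-1, 1]) and
    # query the simulator for the corresponding output x(k+1).
    X = rng.uniform(-1.0, 1.0, size=(n_samples, p + r))
    y = np.array([one_step_simulator(row[:p], row[p:]) for row in X])
    return X, y

# --- Strategy 2: sampling of an experimental trajectory --------------------
def filtered_noise(n, alpha, rng):
    # Low-pass filtered white noise, a simple open-loop excitation signal;
    # a sliding-frequency (chirp) signal could be used instead.
    e = rng.normal(size=n)
    u = np.empty(n)
    u[0] = e[0]
    for t in range(1, n):
        u[t] = alpha * u[t - 1] + (1.0 - alpha) * e[t]
    return u

def narx_dataset_from_trajectory(x, u, p, r):
    # Build the pairs (x_k, g_k) from a recorded input-output trajectory:
    # x_k = [x(k), ..., x(k-p+1), u(k), ..., u(k-r+1)],  g_k = x(k+1).
    start = max(p, r) - 1
    X, y = [], []
    for k in range(start, len(x) - 1):
        X.append(np.concatenate([x[k - p + 1:k + 1][::-1],
                                 u[k - r + 1:k + 1][::-1]]))
        y.append(x[k + 1])
    return np.array(X), np.array(y)

Either function returns a matrix of network inputs and the vector of associated targets, which can then be fed to any feedforward network training procedure.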
Figure 4.9 shows an example of identification of the Van der Pol oscillator. The neural model was built from a training set of 15³ = 3375 examples, provided by sampling the input-output trajectory of the oscillator subject to a random control signal. The same training set was used earlier for the linear regression shown in Fig. 4.7; the results obtained here are far better.
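As a rough illustration of this kind of experiment (not the exact setting of Figs. 4.7 and 4.9), the sketch below simulates a Van der Pol oscillator driven by a filtered random control, builds a NARX training set, and compares a linear regression with a small feedforward network. The assumed state equations, the Euler discretization, the parameter values, the regressor choice and the network size are all illustrative choices, not those of the original figures.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def simulate_vdp(u, dt=0.05, mu=1.0):
    # Euler simulation of a controlled Van der Pol oscillator (assumed form):
    # x1' = x2,  x2' = mu * (1 - x1**2) * x2 - x1 + u(t); x1 is observed.
    x1, x2 = 0.1, 0.0
    out = np.empty(len(u))
    for k, uk in enumerate(u):
        out[k] = x1
        x1, x2 = (x1 + dt * x2,
                  x2 + dt * (mu * (1.0 - x1 ** 2) * x2 - x1 + uk))
    return out

rng = np.random.default_rng(0)
n, alpha = 4000, 0.95
e = rng.normal(scale=0.3, size=n)
u = np.empty(n)
u[0] = 0.0
for t in range(1, n):                         # filtered-noise excitation
    u[t] = alpha * u[t - 1] + e[t]
x = simulate_vdp(u)

# Regressors x_k = [x(k), x(k-1), u(k-1)], target g_k = x(k+1); with this
# Euler scheme the control reaches the observed output with a one-sample
# delay, so u(k-1) is the most recent relevant control value.
X = np.column_stack([x[1:-1], x[:-2], u[:-2]])
y = x[2:]

lin = LinearRegression().fit(X, y)
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0).fit(X, y)

def simulate_model(model, x_init, u, n_steps):
    # Recursive (simulation-mode) use of the one-step model: its own
    # predictions are fed back as lagged outputs.
    xs = list(x_init)                         # [x(0), x(1)]
    for k in range(1, n_steps - 1):
        xs.append(model.predict([[xs[k], xs[k - 1], u[k - 1]]])[0])
    return np.array(xs)

m = 500
for name, model in [("linear", lin), ("neural", net)]:
    xm = simulate_model(model, x[:2], u, m)
    print(name, "model, simulation RMS error:",
          np.sqrt(np.mean((xm - x[:m]) ** 2)))

On such data the nonlinear model typically tracks the controlled limit cycle far longer in recursive simulation than the affine model, whereas the one-step fits of both models can look deceptively similar.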