Fig. 2.31. The ideal model for an input-output representation with the state noise assumption
Training of the Model: Directed (Teacher-Forced) Training
Since the ideal model is a feedforward neural network, it is trained with the techniques discussed in the section devoted to the training of static models. This training scheme is called directed training, or teacher forcing.
Operation of the Model
Since the inputs of the predictor are (in addition to the control inputs) the measured outputs of the process, the output of the model can be computed only one step ahead of time; the predictor is said to be a "one-step-ahead predictor". If the model is intended for use as a simulator, i.e., for predicting the process output on a time horizon that exceeds one sampling period, its inputs are necessarily the previous outputs of the predictor: the latter is no longer operated in optimal conditions.
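The two modes of operation can be contrasted in a short sketch (the linear stand-in for the trained network, and its weights, are assumptions for illustration, with n = 2 past outputs and m = 1 past input):

```python
import numpy as np

def model(past_g, past_u, w):
    # Hypothetical stand-in for the trained network: a linear map
    # of the two past outputs and the one past control input.
    return w[0] * past_g[0] + w[1] * past_g[1] + w[2] * past_u[0]

def one_step_ahead(y_meas, u, w, n=2, m=1):
    """One-step-ahead predictor: inputs are the *measured* outputs."""
    return np.array([model(y_meas[k-n:k][::-1], u[k-m:k][::-1], w)
                     for k in range(n, len(y_meas))])

def simulate(y_init, u, w, n=2, m=1):
    """Simulator: inputs are the predictor's *own* previous outputs,
    so prediction errors can accumulate over the horizon."""
    g = list(y_init)  # n initial conditions
    for k in range(n, len(u)):
        g.append(model(g[k-n:k][::-1], u[k-m:k][::-1], w))
    return np.array(g)
```

In `one_step_ahead` each prediction is anchored to fresh measurements; in `simulate` the feedback loop is closed on the model itself.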
Output Noise Assumption (Input-Output Representation)
Now we make a different assumption, namely that the process can be appropriately described, in the desired validity domain, by a representation of the form

x_p(k) = ϕ(x_p(k−1), …, x_p(k−n), u(k−1), …, u(k−m))
y_p(k) = x_p(k) + b(k).
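Under this assumption the noise b(k) corrupts only the measurement, not the state recursion. A minimal simulation of such a process (the map ϕ, the orders n = 2 and m = 1, and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x1, x2, u1):
    # Hypothetical nonlinear process map, standing in for the unknown phi.
    return 0.8 * x1 - 0.2 * x2 + np.tanh(u1)

N, b_std = 200, 0.05
u = rng.uniform(-1.0, 1.0, N)
x = np.zeros(N)                      # noise-free state x_p(k)
for k in range(2, N):
    x[k] = phi(x[k-1], x[k-2], u[k-1])
y = x + rng.normal(0.0, b_std, N)    # y_p(k) = x_p(k) + b(k)
```

Because b(k) appears outside the loop, it affects the measured output at time k only; the recursion for x is untouched by the noise.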
Therefore, the present assumption considers the noise as additive on the output (Fig. 2.32). Thus, it appears outside the loop; hence it has an influence on the output at the same time step only. That assumption is known, in linear adaptive modeling, as the "output error" or "parallel" assumption [Narendra et al. 1989]. Since the output at time k is a function of the noise at the same time step only, the model that is sought should not involve the past process outputs. Therefore, we consider a recurrent neural network, shown in Fig. 2.33, which obeys the equation
g(k) = ϕ_NN(g(k−1), …, g(k−n), u(k−1), …, u(k−m), w),