Fig. 2.30. Input-output representation, state noise assumption
where y_p(k) is the measured process output. We assume that additive noise occurs at the process output (see Fig. 2.30) and that, at time k, it influences the present output as well as the n past outputs. In nonlinear modeling, that assumption is known as NARX (Nonlinear AutoRegressive with eXogenous inputs) (see also Chap. 4), or equation error (see for instance [Ljung 1987; Goodwin et al. 1984]), or series-parallel [Narendra et al. 1989] in adaptive modeling.
Instead of the term assumption, the term postulated model is sometimes
used in the statistics literature.
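For concreteness, the postulated model underlying this assumption can be written out explicitly (the notation follows the relation y_p(k) − g(k) = b(k) derived below, with b(k) the additive noise):

y_p(k) = ϕ(y_p(k−1), ..., y_p(k−n), u(k−1), ..., u(k−m)) + b(k),

so the noise enters the model equation directly, which is why the measured past outputs must appear among the arguments of the deterministic function ϕ.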
We assume that noise acts on the output not only directly at time k, but also through the outputs at the n previous time steps; since the model that is sought should be such that the modeling error at time k equals the noise at that time step, it must take the process outputs at the n previous time steps into account. Consider the feedforward neural network shown in Fig. 2.31; it obeys the equation
g(k) = ϕ_NN(y_p(k−1), ..., y_p(k−n), u(k−1), ..., u(k−m), w),
where w is a vector of parameters, and where the function ϕ_NN is implemented by the feedforward neural network. Assume that the neural network ϕ_NN has been trained, i.e., that a vector of parameters w has been found such that the network computes the function ϕ exactly. Then the relation y_p(k) − g(k) = b(k) holds for all k. Thus, the model is such that the modeling error is equal to the noise of the process: it is the ideal model, since it captures all that is deterministic in the representation and does not model the noise. Note that the inputs of the model are the control inputs and the measured process outputs: the ideal model (also called the "predictor") is not trained as a recurrent neural network.
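Because the regressors of such a predictor are measured quantities, training reduces to ordinary supervised regression on a feedforward network. The following sketch illustrates this series-parallel (NARX) training in Python; the synthetic process, the network size, and the use of scikit-learn's MLPRegressor as ϕ_NN are assumptions made for the illustration, not choices taken from the text.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_regressors(y_p, u, n, m):
    # Build [y_p(k-1), ..., y_p(k-n), u(k-1), ..., u(k-m)] for each usable k;
    # the targets are the measured outputs y_p(k) (series-parallel training).
    start = max(n, m)
    X, t = [], []
    for k in range(start, len(y_p)):
        past_y = y_p[k - n:k][::-1]   # y_p(k-1), ..., y_p(k-n)
        past_u = u[k - m:k][::-1]     # u(k-1), ..., u(k-m)
        X.append(np.concatenate([past_y, past_u]))
        t.append(y_p[k])
    return np.array(X), np.array(t)

# Synthetic noisy process, for illustration only (hypothetical data).
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 500)
y_p = np.zeros(500)
for k in range(1, 500):
    y_p[k] = 0.8 * y_p[k - 1] + 0.3 * np.tanh(u[k - 1]) + 0.02 * rng.standard_normal()

X, t = make_narx_regressors(y_p, u, n=2, m=2)
phi_nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
phi_nn.fit(X, t)   # ordinary feedforward training: the model is not recurrent
print("one-step modeling error std:", np.std(t - phi_nn.predict(X)))

If the network is well trained, the residual t − phi_nn.predict(X) behaves like the noise b(k), in line with the relation y_p(k) − g(k) = b(k) above.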