Fig. 2.38. Modeling error for a process with state noise after training according to the state noise assumption
State Noise Assumption
Finally, we make the (correct) assumption that the noise is state noise. The ideal
model is then a feedforward neural network. Figure 2.38 shows that the modeling
error is white noise with amplitude 0.5: the ideal predictor has thus been obtained.
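A minimal sketch of that training setup, assuming hypothetical data and function names (build_regressors, a first-order simulated process, and a linear least-squares fit standing in for the trained feedforward network): under the state noise assumption, the regressor of the one-step predictor contains the measured past process outputs and past inputs only.

```python
# Sketch only: a linear map replaces the trained network; names and data are illustrative.
import numpy as np

def build_regressors(y_meas, u, n=2, m=2):
    """Rows [y_p(k-1..k-n), u(k-1..k-m)]; targets y_p(k)."""
    start = max(n, m)
    X, t = [], []
    for k in range(start, len(y_meas)):
        X.append(np.concatenate([y_meas[k - n:k][::-1], u[k - m:k][::-1]]))
        t.append(y_meas[k])
    return np.array(X), np.array(t)

# Hypothetical first-order process driven by a random input, with state noise.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.3 * u[k - 1] + 0.05 * rng.standard_normal()

X, t = build_regressors(y, u, n=2, m=2)
w, *_ = np.linalg.lstsq(X, t, rcond=None)   # stand-in for training the network
print("one-step prediction RMSE:", np.sqrt(np.mean((X @ w - t) ** 2)))
```

With a process of this form, the residual of the fitted predictor is essentially the state noise itself, which is why the modeling error in Fig. 2.38 is white.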
2.7.2.3 Output Noise and State Noise Assumption (Input-Output Representation)
Now we make the assumption that the noise has an influence both on the
output and on the state; the process can be appropriately described by a
model of the form
x_p(k) = \varphi\bigl(x_p(k-1), \ldots, x_p(k-n), u(k-1), \ldots, u(k-m), b(k-1), \ldots, b(k-p)\bigr)
y_p(k) = x_p(k) + b(k),
as shown in Fig. 2.39. That assumption is sometimes called NARMAX (nonlinear autoregressive with moving average and exogenous inputs).
In the present case, the model must take into account both the past values
of the process output and the past values of the model output.
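A minimal sketch of such a predictor, under assumed names and shapes (narmax_predict, the fixed weight vector, and the sinusoidal test signals are all hypothetical, with a linear map in place of the trained network): at each step the regressor mixes past measured process outputs, past model outputs, and past inputs.

```python
# Sketch only: psi is a stand-in for the trained network; data are illustrative.
import numpy as np

def narmax_predict(psi, y_meas, u, n=2, m=2):
    """One-step predictions g(k) built from both process and model outputs."""
    g = np.zeros_like(y_meas)
    start = max(n, m)
    for k in range(start, len(y_meas)):
        reg = np.concatenate([
            y_meas[k - n:k][::-1],   # past process outputs y_p(k-1..k-n)
            g[k - n:k][::-1],        # past model outputs   g(k-1..k-n)
            u[k - m:k][::-1],        # past inputs          u(k-1..k-m)
        ])
        g[k] = psi(reg)
    return g

weights = np.array([0.5, 0.1, 0.3, -0.1, 0.3, 0.05])  # hypothetical fixed map
g = narmax_predict(lambda r: float(weights @ r),
                   y_meas=np.sin(0.1 * np.arange(100)),
                   u=np.cos(0.05 * np.arange(100)))
print(g[:5])
```

Because the regressor feeds the model's own past outputs back in, training such a predictor is a semidirected (partially recurrent) procedure rather than the purely feedforward training used under the state noise assumption.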
2.7.2.4 Summary on the Structure, Training, and Operation of Dynamic Input-Output Models
Table 2.1 summarizes the noise assumptions and their consequences on the
training of input-output models.
2.7.2.5 State-Space Representations
We consider here the same assumptions as in the previous section, but we
discuss their consequences on state-space models.