ARMAX models or NARMAX models (with X for exogenous). In these models, the evolution equation takes into account exogenous variables at the current instant or in the past. These exogenous variables are known, and they are the exact equivalent of the control signal. We thus obtain the ARMAX(p, q, r) model,
\[
x(k+1) = a_1 x(k) + \cdots + a_p x(k-p+1) + b_0 v(k+1) + b_1 v(k) + \cdots + b_q v(k-q+1) + c_1 u(k) + \cdots + c_r u(k-r+1),
\]
and the NARMAX(p, q, r) model,
\[
x(k+1) = f\left[ x(k), \ldots, x(k-p+1), v(k+1), v(k), \ldots, v(k-q+1), u(k), \ldots, u(k-r+1) \right].
\]
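As an illustration, the ARMAX recursion above can be simulated directly. The following sketch is ours, not from the text: the function name and parameter layout are assumptions, and all values before instant 0 are treated as zero.

```python
import numpy as np

def simulate_armax(a, b, c, u, v, x0=0.0):
    """Simulate the ARMAX(p, q, r) recursion
        x(k+1) = a_1 x(k) + ... + a_p x(k-p+1)
               + b_0 v(k+1) + b_1 v(k) + ... + b_q v(k-q+1)
               + c_1 u(k) + ... + c_r u(k-r+1).
    a has length p, b has length q+1, c has length r;
    u is the control signal (length n), v the state noise (length n+1).
    Values of x, u, v at negative instants are taken as zero.
    """
    p, q, r = len(a), len(b) - 1, len(c)
    n = len(u)
    x = np.zeros(n + 1)
    x[0] = x0
    for k in range(n):
        ar = sum(a[i] * x[k - i] for i in range(p) if k - i >= 0)  # autoregressive part
        ma = sum(b[j] * v[k + 1 - j] for j in range(q + 1) if k + 1 - j >= 0)  # noise part
        ex = sum(c[m] * u[k - m] for m in range(r) if k - m >= 0)  # exogenous (control) part
        x[k + 1] = ar + ma + ex
    return x

# Example: ARMAX(1, 0, 1) with no noise and a constant unit control
x = simulate_armax(a=[0.5], b=[1.0], c=[1.0], u=np.ones(3), v=np.zeros(4))
# x = [0, 1, 1.5, 1.75]
```

With the noise v set to zero, the recursion reduces to the ARX part, which is the form used for regression in the next section.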
4.1.9 Limits of Modeling Uncertainties Using State Noise
The previous sections introduced the state noise v(k), which models the uncertainty on the state variables using random variables within a probabilistic framework. This type of model is relevant if the uncertainty exhibits enough statistical regularity to extract some knowledge about it, and thereby to improve the prediction and the quality of control. However, that is not always the case, and the occurrence of uncertainties or unknowns that are poorly represented by random variables is an intrinsic limitation of any statistical algorithm. A good example of this situation is the tracking of a non-cooperative target: if the unknown control of the target is modeled by a stochastic process, the intention of the pilot is badly captured by such a statistical model.
In that case, when no other specific knowledge is available, the probabilistic framework is just the lesser evil. It is then important to use all the available information rather than to lump all the ill-identified variables into a large-dimensional stochastic process: the number of unknown parameters to be identified has to be kept small. These considerations support the use of parsimonious models, and among them neural networks, as explained in Chap. 2.
4.2 Regression Modeling of Controlled Dynamical
Systems
4.2.1 Linear Regression for Controlled Dynamical Systems
4.2.1.1 Outline of the Algorithm
In Chap. 2, linear regression was described as the task of finding the (n, 1) column vector w = (w_1; \ldots; w_n) that minimizes the sum of the squared errors (SSE)
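For a controlled dynamical system, the least-squares fit amounts to stacking past state and control values into a regression matrix and solving for w. The sketch below is illustrative, not from the text: the ARX(2, 1) coefficients and data are invented, and `np.linalg.lstsq` stands in for whatever solver the algorithm actually prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" ARX(2, 1) coefficients (assumed, not from the text):
# x(k+1) = a_1 x(k) + a_2 x(k-1) + c_1 u(k)
a_true = np.array([0.6, -0.2])  # autoregressive part
c_true = np.array([0.8])        # exogenous (control) part

# Generate a noise-free trajectory driven by a known control signal
n = 200
u = rng.standard_normal(n)
x = np.zeros(n + 1)
for k in range(2, n):
    x[k + 1] = a_true[0] * x[k] + a_true[1] * x[k - 1] + c_true[0] * u[k]

# Regression matrix: each row holds [x(k), x(k-1), u(k)],
# and the target is the next state x(k+1)
A = np.array([[x[k], x[k - 1], u[k]] for k in range(2, n)])
y = x[3:n + 1]

# Least-squares estimate of w, minimizing the SSE ||y - A w||^2
w, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Since the data here are noise-free, the estimate w recovers the three coefficients (0.6, -0.2, 0.8) up to machine precision; with state noise present, w would only minimize the SSE on average.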