where $u(n)$ and $y(n)$ denote, respectively, the input and output of the model at discrete time $n$, with $u(n), y(n) \in \mathbb{R}$. Moreover, $d_y$ and $d_u$ are the output-memory and input-memory orders: $d_y$ represents the number of lagged output values, which is often referred to as the order of the model, and $d_u$ represents the number of lagged input values ($d_u, d_y \in \mathbb{Z}$, $d_u, d_y \geq 1$ and $d_u \leq d_y$). The vectors $\mathbf{y}(n)$ and $\mathbf{u}(n)$ therefore form the output and input regressors, respectively.
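As a concrete illustration of how these regressors are formed, the following minimal Python/NumPy sketch builds the lagged-output and lagged-input vectors; the toy series, the orders ($d_y = 3$, $d_u = 2$), and the helper names are assumptions of the example only.

```python
import numpy as np

def output_regressor(y, n, d_y):
    """y(n), y(n-1), ..., y(n - d_y + 1) -- the lagged-output vector."""
    return np.array([y[n - k] for k in range(d_y)])

def input_regressor(u, n, d_u):
    """u(n), u(n-1), ..., u(n - d_u + 1) -- the lagged-input vector."""
    return np.array([u[n - k] for k in range(d_u)])

# Toy example with assumed orders d_y = 3, d_u = 2
y = np.sin(0.1 * np.arange(50))                  # toy output series
u = np.random.default_rng(0).normal(size=50)     # toy input series
x = np.concatenate([output_regressor(y, 10, 3), input_regressor(u, 10, 2)])
print(x.shape)   # (5,): d_y + d_u entries feed the NARX mapping f
```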
The NARX model is commonly trained using two basic modes, namely:
1. Parallel (P) Mode
In this mode, the output regressor is built from the estimated outputs, which are fed back into the regressor (both modes are sketched in code after Eq. (8)):
$$\hat{y}(n+1) = f\big(\hat{y}(n), \ldots, \hat{y}(n - d_y + 1),\ u(n), u(n-1), \ldots, u(n - d_u + 1)\big) \tag{7}$$
2. Series-Parallel (SP) Mode
In this mode, the output regressor is built from the actual measured output values:
$$\hat{y}(n+1) = f\big(y(n), \ldots, y(n - d_y + 1),\ u(n), u(n-1), \ldots, u(n - d_u + 1)\big) \tag{8}$$
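To make the contrast between Eqs. (7) and (8) concrete, the following minimal sketch implements both modes around a hypothetical one-step map `f` standing in for the trained network; the orders ($d_y = 3$, $d_u = 2$) and the toy data are assumed. The parallel mode feeds its own estimates back into the output regressor, whereas the series-parallel mode always uses the measured outputs.

```python
import numpy as np

def regressor(seq, n, d):
    """[seq(n), seq(n-1), ..., seq(n - d + 1)] as a NumPy array."""
    return np.array([seq[n - k] for k in range(d)])

def f(y_reg, u_reg):
    # Hypothetical one-step map standing in for the trained NARX network.
    return 0.6 * y_reg[0] - 0.2 * y_reg[-1] + 0.3 * np.tanh(u_reg[0])

def predict_series_parallel(f, y, u, d_y, d_u):
    """Eq. (8): open loop -- measured outputs y(n), ..., y(n-d_y+1) enter the regressor."""
    return np.array([f(regressor(y, n, d_y), regressor(u, n, d_u))
                     for n in range(d_y - 1, len(y) - 1)])

def simulate_parallel(f, y_seed, u, d_y, d_u, steps):
    """Eq. (7): closed loop -- the model's own estimates are fed back."""
    y_hat = list(y_seed)                         # seed with the first d_y measured values
    for n in range(len(y_seed) - 1, len(y_seed) - 1 + steps):
        y_hat.append(f(regressor(y_hat, n, d_y), regressor(u, n, d_u)))
    return np.array(y_hat[len(y_seed):])         # estimates from time d_y onwards

# Toy usage with assumed d_y = 3, d_u = 2
rng = np.random.default_rng(0)
u = rng.normal(size=60)
y = np.zeros(60)
for n in range(2, 59):                           # toy "measured" plant response
    y[n + 1] = f(regressor(y, n, 3), regressor(u, n, 2)) + 0.01 * rng.normal()

sp_pred = predict_series_parallel(f, y, u, d_y=3, d_u=2)          # one-step-ahead (SP)
p_pred = simulate_parallel(f, y[:3], u, d_y=3, d_u=2, steps=20)   # free-run (P)
```

In practice, the SP (open-loop) form is the one typically used for one-step-ahead prediction and for training, while the P (closed-loop) form is used for multi-step simulation.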
It is worth noting that a standard feed-forward architecture trained with the back-propagation (BP) technique can be used directly in the SP mode of the NARX model; in addition, various other learning algorithms are also widely applicable. A form of regularization may also be employed, because additive measurement errors $\epsilon_n$, which are zero-mean Gaussian variables with $\mathrm{Var}[\epsilon_n] = \sigma^2$, can also be present. Figure 4 illustrates the NARX network with input and output tapped delay lines (TDL) in the parallel and series-parallel architectures (Neural Network Toolbox User's Guide 1992).
Fig. 4 Parallel and series-parallel architectures of NARX network
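As a concrete, hypothetical illustration of training a standard feed-forward network with BP in the SP mode, the sketch below uses scikit-learn's MLPRegressor, whose `alpha` parameter applies L2 weight regularization (one possible form of the regularization mentioned above); the toy system, noise level, orders, and network size are all assumptions of the example, not taken from the text.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy input/output series with additive zero-mean Gaussian noise eps_n (Var = sigma^2)
N, sigma = 500, 0.05
u = rng.uniform(-1.0, 1.0, size=N)
y = np.zeros(N)
for n in range(2, N - 1):
    y[n + 1] = 0.6 * y[n] - 0.2 * y[n - 2] + 0.4 * np.tanh(u[n]) + sigma * rng.normal()

# Series-parallel (SP) regressors: inputs [y(n)..y(n-d_y+1), u(n)..u(n-d_u+1)], target y(n+1)
d_y, d_u = 3, 2
X = np.array([np.concatenate([y[n:n - d_y:-1] if n - d_y >= 0 else y[n::-1],
                              u[n:n - d_u:-1] if n - d_u >= 0 else u[n::-1]])
              for n in range(d_y - 1, N - 1)])
t = y[d_y:N]

# Standard feed-forward network trained with back-propagation; alpha adds L2 regularization
net = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-3, max_iter=2000, random_state=0)
net.fit(X, t)
print("one-step-ahead R^2:", net.score(X, t))
```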