Finally, we provide the mathematical basis by which even linear models can be
derived from the non-linear models obtained from empirical data in Section 4.6.
4.6 LINEARISED APPROXIMATION FROM NON-LINEAR MODELS
This section introduces linearisation of a non-linear system model (at a particular
operating point) such as the ANN described previously. The need to extract linear
properties of a non-linear model often arises because many systems function largely
in a specific range instead of spanning an entire operating range. Furthermore, it is
easier to work with linear systems due to the mathematics involved. With reference
to Figure 4.7, it is possible to envision how a non-linear function may be approximated by a series of linear segments over different ranges.
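The idea of approximating a non-linear curve by a linear segment about an operating point can be sketched with a first-order Taylor expansion. In the snippet below, the function (tanh), the operating point, and the sample inputs are illustrative choices, not values from the text.

```python
import numpy as np

# Linearise f(u) = tanh(u) about an operating point u0 using a
# first-order Taylor expansion: f(u) ~ f(u0) + f'(u0) * (u - u0).
# Function, operating point, and test inputs are illustrative only.

def f(u):
    return np.tanh(u)

def f_prime(u):
    return 1.0 - np.tanh(u) ** 2    # d/du tanh(u)

u0 = 0.5                            # chosen operating point
slope = f_prime(u0)
offset = f(u0) - slope * u0         # intercept of the tangent line

def f_linear(u):
    """Linear segment valid near u0."""
    return slope * u + offset

# Near the operating point the linear segment tracks the curve closely;
# the approximation error grows as the input moves away from u0.
err_near = abs(f(0.6) - f_linear(0.6))
err_far = abs(f(2.0) - f_linear(2.0))
```

Covering the whole operating range would then amount to repeating this construction over several segments, as suggested by the figure.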
We provide the mathematical derivation for linearisation of a DTDNN below.
The linearised model takes the form of state space [8] equations that are common
in many system identification and control studies. In addition to being mathematically mature, state space equations provide constructs built on linear, first-order variables that are convenient for both computation and extension. For linear systems, the state space equations are:
x(k + 1) = Ax(k) + Bu(k) + Ke(k)    (4.16)

y(k) = Cx(k) + Du(k)    (4.17)
where x(k) is the state vector, y(k) is the system output, u(k) the system input, and e(k) the stochastic error. A, B, C, D, and K are the system matrices.
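A one-step simulation of Equations (4.16) and (4.17) can be written directly from their matrix form. The matrix values below are arbitrary placeholders chosen for illustration; they are not taken from the text.

```python
import numpy as np

# Simulate the linear state-space model of Eqs. (4.16)-(4.17):
#   x(k+1) = A x(k) + B u(k) + K e(k)
#   y(k)   = C x(k) + D u(k)
# All matrix entries below are illustrative placeholders.

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[0.1],
              [0.1]])

def step(x, u, e):
    """Advance the state one time step; return (x_next, y)."""
    x_next = A @ x + B @ u + K @ e
    y = C @ x + D @ u
    return x_next, y

x = np.zeros((2, 1))        # initial state
u = np.array([[1.0]])       # constant input
e = np.array([[0.0]])       # stochastic error set to zero in this sketch

for k in range(3):
    x, y = step(x, u, e)
```

With the error term zeroed, the recursion reduces to the deterministic part of the model, which is the part retained after linearisation.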
Equations (4.16) and (4.17) describe the relationship of the internal states, input,
and output of the system. The state variables are denoted by x1 and x2, while the input and output of the neural network are u and y, respectively. Wi denotes the weights assigned to the neurons on layer i, while Bi refers to the corresponding bias value on the same layer. The triggering function at each layer of the neural network is denoted by Fi, which typically may be linear, sigmoid, or threshold in nature.
Unit time delays were introduced at the input stage of each layer, as denoted by the di blocks. Since the time delays relate to the dynamics of the neural network, the related equations are presented with a time-step variable k, reflecting their implementation in digital systems such as computers.
x1(k + 1) = u(k)    (4.18)

x2(k + 1) = F1(W1 x1(k) + B1)    (4.19)

y(k) = F2(W2 x2(k) + B2)    (4.20)
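Equations (4.18) to (4.20) translate almost line for line into code. In this sketch the layer sizes, weights, and biases are illustrative placeholders, and F1 is taken as a sigmoid with F2 as the identity (linear), two of the triggering-function types mentioned above.

```python
import numpy as np

# Time-stepped evaluation of the two-layer network of Eqs. (4.18)-(4.20):
#   x1(k+1) = u(k)
#   x2(k+1) = F1(W1 x1(k) + B1)
#   y(k)    = F2(W2 x2(k) + B2)
# Weights, biases, and layer sizes are illustrative, not from the text.

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

W1 = np.array([[0.5], [-0.3]])   # layer-1 weights: 2 hidden units, 1 input
B1 = np.array([[0.1], [0.2]])    # layer-1 biases
W2 = np.array([[1.0, 1.0]])      # layer-2 weights: 1 output, 2 hidden units
B2 = np.array([[0.0]])           # layer-2 bias

x1 = np.zeros((1, 1))            # delayed input state (d1 block)
x2 = np.zeros((2, 1))            # delayed hidden state (d2 block)

for k in range(3):
    u = np.array([[1.0]])            # constant input signal
    y = W2 @ x2 + B2                 # Eq. (4.20), F2 = identity
    x2 = sigmoid(W1 @ x1 + B1)       # Eq. (4.19), F1 = sigmoid
    x1 = u                           # Eq. (4.18)
```

Note that y(k) is computed from the state values held at step k before they are updated, mirroring the unit delays in the block diagram.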