the results obtained in Section 6.4.2.1 will be used, where the derivatives of the sum square error $S$ with respect to the network's adjustable parameters (free parameters) $\theta_{0j}^{l}$, $\theta_{ij}^{l}$, $c_{i}^{l}$, and $\sigma_{i}^{l}$ for the fuzzy logic system (6.9a)-(6.9c) were already computed and listed in (6.15a)-(6.15e).
Now, considering the singleton consequent part (constant term) of the rules and taking into account Equation (6.15a), we can rewrite the gradient $\nabla S(\theta_{0j}^{l}) \equiv \{\partial S / \partial \theta_{0j}^{l}\}$ as

$$\nabla S(\theta_{0j}^{l}) \equiv \frac{\partial S}{\partial \theta_{0j}^{l}} = \left[\frac{z^{l}}{b}\right]^{T} \left(f_{j} - d_{j}\right), \qquad (6.27)$$
where $f_{j}$ is the actual output vector from the $j$th output node of the Takagi-Sugeno-type multiple input multiple output neuro-fuzzy network and $d_{j}$ is the corresponding desired output vector at the $j$th output node for a given set of input-output training data. Taking into account Equation (6.27) and comparing it with (6.23a), where the gradient is expressed using the transpose of the Jacobian matrix multiplied by the network's error vector, i.e.,

$$\nabla S(w) \equiv \frac{\partial S}{\partial w} = J^{T}(w)\,e, \qquad (6.28)$$
where $w$ is the free parameter of the network, the transpose of the Jacobian matrix $J^{T}(\theta_{0j}^{l})$ and the Jacobian matrix $J(\theta_{0j}^{l})$ for the free parameter $\theta_{0j}^{l}$ of the neuro-fuzzy network can be defined by

$$J^{T}(\theta_{0j}^{l}) = \left[\frac{z^{l}}{b}\right]^{T}, \qquad (6.29a)$$

$$J(\theta_{0j}^{l}) \equiv \left[J^{T}(\theta_{0j}^{l})\right]^{T} = \left[\frac{z^{l}}{b}\right]. \qquad (6.29b)$$
This is because the prediction error at the $j$th output node of the Takagi-Sugeno-type neuro-fuzzy network is

$$e_{j} \equiv f_{j} - d_{j}. \qquad (6.30)$$

Indeed, since $f_{j} = \sum_{l} \theta_{0j}^{l}\, z^{l} / b$ and the firing strengths $z^{l}$ do not depend on the consequent parameters, we have $\partial e_{j} / \partial \theta_{0j}^{l} = z^{l}/b$, which is exactly the Jacobian in (6.29b).
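The relations (6.27)-(6.30) can be checked numerically. The sketch below (all function and variable names, and the $\tfrac{1}{2}$ scaling of $S$, are assumptions for illustration, not taken from the text) builds a small singleton-consequent Takagi-Sugeno system with Gaussian membership functions, forms the Jacobian rows $z^{l}/b$ per training pattern as in (6.29b), and computes the gradient $J^{T}e$ of (6.27)-(6.28):

```python
import numpy as np

# Hedged sketch (names hypothetical): zero-order Takagi-Sugeno system with
# M rules, singleton consequents theta[l], and Gaussian memberships with
# centers c[l, i] and widths sig[l, i].
def firing_strengths(x, c, sig):
    # z[l] = prod_i exp(-((x_i - c_li) / sig_li)^2), one value per rule
    return np.exp(-(((x - c) / sig) ** 2).sum(axis=1))

def ts_output(x, c, sig, theta):
    z = firing_strengths(x, c, sig)
    return (theta * z).sum() / z.sum()       # f = sum_l theta^l z^l / b

def jacobian_rows(X, c, sig):
    """One row per training pattern: [z^1/b, ..., z^M/b], cf. (6.29b)."""
    Z = np.array([firing_strengths(x, c, sig) for x in X])
    return Z / Z.sum(axis=1, keepdims=True)

def grad_S_theta(X, d, c, sig, theta):
    """Gradient (6.27)/(6.28): J^T e with e = f - d, assuming S = e^T e / 2."""
    f = np.array([ts_output(x, c, sig, theta) for x in X])
    return jacobian_rows(X, c, sig).T @ (f - d)
```

With the assumed $S = \tfrac{1}{2}\sum_{p}(f_{j}-d_{j})^{2}$, each component of `grad_S_theta` should agree with a central finite difference of $S$ in the corresponding $\theta_{0j}^{l}$.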
However, if we consider the normalized prediction error of the network at the $j$th output node, instead of the original prediction error at the $j$th output node, then by applying a similar technique, the transpose of the Jacobian matrix $J^{T}(\theta_{0j}^{l})$ and the Jacobian matrix $J(\theta_{0j}^{l})$ itself for the free parameter $\theta_{0j}^{l}$ will be

$$J^{T}(\theta_{0j}^{l}) = \left[z^{l}\right]^{T}, \qquad (6.31a)$$

$$J(\theta_{0j}^{l}) \equiv \left[J^{T}(\theta_{0j}^{l})\right]^{T} = z^{l}, \qquad (6.31b)$$
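The point of isolating the Jacobian for each free parameter is that it feeds directly into the damped update referenced around (6.23a). Below is a minimal sketch of one Levenberg-Marquardt step for the singleton consequents, assuming the Jacobian rows are $z^{l}/b$ per pattern as in (6.29b); the names `lm_step`, `Zb`, and `mu` are illustrative, not from the text:

```python
import numpy as np

def lm_step(Zb, f, d, theta, mu=0.1):
    """One Levenberg-Marquardt update of the singleton consequents theta.

    Zb   -- P x M Jacobian, rows [z^1/b, ..., z^M/b] per pattern, cf. (6.29b)
    f, d -- actual and desired outputs at the j-th node over P patterns
    mu   -- damping factor (large mu ~ gradient descent, small ~ Gauss-Newton)
    """
    e = f - d                                    # prediction error (6.30)
    H = Zb.T @ Zb + mu * np.eye(Zb.shape[1])     # damped approximate Hessian
    return theta - np.linalg.solve(H, Zb.T @ e)  # updated consequents
```

Because the network output is linear in the singleton consequents ($f = Z_{b}\,\theta$ for fixed membership parameters), a single step with a very small damping factor already reaches the least-squares solution for $\theta$.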