This is because the normalized prediction error at the j-th output node of the multi-input multi-output neuro-fuzzy network is:

$$ e_j^{\,\text{normalized}} \triangleq \frac{d_j - f_j}{b} \,. \qquad (6.32) $$
In the above equation, $z^{l}$ is a matrix of size $M \times N$ that contains the degree of fulfilment (firing strength) of each fuzzy rule computed for a given set of training samples, where M is the number of fuzzy rules (and also the number of Gaussian membership functions implemented for the fuzzy partition of the input universes of discourse) and N is the number of training samples (input-output data samples).
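The firing-strength matrix described above can be sketched as follows. This is a minimal illustration, not the text's own code: all function and variable names are assumptions, and the Gaussian membership grade is taken as $\exp(-((x-c)/\sigma)^2)$ with the product T-norm over the inputs.

```python
import numpy as np

def firing_strengths(X, c, sigma):
    """X: (N, n) training inputs; c, sigma: (M, n) Gaussian centres/widths.
    Returns z of shape (M, N): row l holds the degree of fulfilment of
    fuzzy rule l for each of the N training samples."""
    # membership grade of input i of sample k in rule l's Gaussian
    g = np.exp(-((X[None, :, :] - c[:, None, :]) / sigma[:, None, :]) ** 2)  # (M, N, n)
    # product T-norm over the n inputs gives the rule firing strength
    return g.prod(axis=2)  # (M, N)

X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])   # N = 3 samples, n = 2 inputs
c = np.array([[0.0, 1.0], [2.0, 0.0]])               # M = 2 rule centres
sigma = np.ones((2, 2))                              # unit widths
z = firing_strengths(X, c, sigma)
print(z.shape)  # (2, 3), i.e. M x N as stated in the text
```

A sample lying exactly on a rule's centre yields a firing strength of 1 for that rule, as expected for Gaussian membership functions.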
Adopting a similar technique and taking into account Equation (6.28), the original prediction error (6.30), and Equation (6.15b), which computes the derivative of S with respect to $\theta_{ij}$, we can get the transposition of the Jacobian matrix and, by transposing it once more, the Jacobian matrix itself for the network's free parameter $\theta_{ij}$ using

$$ J_{\theta_{ij}}^{T} \triangleq \frac{z^{l}\, x_i}{b} \,, \qquad (6.33a) $$

$$ J_{\theta_{ij}} \triangleq \left[ J_{\theta_{ij}}^{T} \right]^{T} = \left[ \frac{z^{l}\, x_i}{b} \right]^{T} . \qquad (6.33b) $$
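Under the reading of Equation (6.33a) given here, the transposed-Jacobian entries for a consequent parameter $\theta_{ij}$ of rule l over the N samples are $z^{l} x_i / b$, with $b = \sum_l z^l$ the per-sample sum of firing strengths. The sketch below illustrates this; the array names are assumptions, and the normalized-error variant of Equation (6.34a) is obtained simply by dropping the division by b.

```python
import numpy as np

def jacobian_theta(z, X, i, l):
    """z: (M, N) firing strengths; X: (N, n) inputs.
    Returns the length-N vector of derivative entries for theta_{ij} of rule l,
    i.e. z^l * x_i / b per training sample (Equation (6.33a) as read here)."""
    b = z.sum(axis=0)             # (N,): normalizer, sum of firing strengths
    return z[l] * X[:, i] / b     # elementwise over the N samples

z = np.array([[0.5, 1.0],         # M = 2 rules
              [0.5, 1.0]])        # N = 2 samples
X = np.array([[2.0], [4.0]])      # n = 1 input
J_T = jacobian_theta(z, X, 0, 0)
print(J_T)  # [1. 2.]: (0.5*2)/1.0 and (1.0*4)/2.0
```

Transposing the resulting vector (or stacking such vectors for all parameters) then gives the Jacobian itself, as in Equation (6.33b).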
Also, if instead of the original prediction error we consider here the normalized prediction error of Equation (6.32) and, as usual, Equations (6.28) and (6.15b), then we get the transposed Jacobian matrix and the Jacobian matrix itself for the same parameter $\theta_{ij}$ as

$$ J_{\theta_{ij}}^{T} \triangleq z^{l}\, x_i \,, \qquad (6.34a) $$

$$ J_{\theta_{ij}} \triangleq \left[ J_{\theta_{ij}}^{T} \right]^{T} = \left[ z^{l}\, x_i \right]^{T} . \qquad (6.34b) $$
Finally, to compute the Jacobian matrices and their transpositions for the remaining free parameters of the network, i.e. for the parameters $c_i$ and $\sigma_i$, we also use a similar technique, whereby Equation (6.15e), which computes the term A, has to be reorganized.
Let us denote

$$ \alpha_j^{\,l} \triangleq y_j - f_j^{\,l} \,. \qquad (6.35) $$
Using Equations (6.30) and (6.35) we can rewrite (6.15e) as