the mean ($c_i^l$) and variance ($\sigma_i^l$) parameters of the Gaussian membership
functions, so that the performance function (6.11a) is minimized. For convenience,
we replace the network output, the desired output, and the error in the above
definition of error by $f_j$, $d_j$, and $e_j$, respectively, so that the
individual error becomes

$e_j = d_j - f_j$.
We recall that the steepest descent rule used for training of neuro-fuzzy
networks is based on the recursive expressions
$\theta_{0j}^l(k+1) = \theta_{0j}^l(k) - \eta \, \dfrac{\partial S}{\partial \theta_{0j}^l}$   (6.12a)

$\theta_{ij}^l(k+1) = \theta_{ij}^l(k) - \eta \, \dfrac{\partial S}{\partial \theta_{ij}^l}$   (6.12b)

$c_i^l(k+1) = c_i^l(k) - \eta \, \dfrac{\partial S}{\partial c_i^l}$   (6.12c)

$\sigma_i^l(k+1) = \sigma_i^l(k) - \eta \, \dfrac{\partial S}{\partial \sigma_i^l}$   (6.12d)
where $S$ is the performance function (6.11b) at the $k$th iteration step, and
$\theta_{0j}^l(k)$, $\theta_{ij}^l(k)$, $c_i^l(k)$, and $\sigma_i^l(k)$ are the
free parameters of the network at the same iteration step, the starting values of
which are, in general, randomly selected. In addition, $\eta$ is the constant
step size or learning rate (usually a small positive constant); i = 1,
2, ..., n (with n as the number of inputs to the neuro-fuzzy network); j = 1, 2, ..., m
(with m as the number of outputs from the neuro-fuzzy network); and l = 1, 2, 3,
..., M (with M as the number of Gaussian membership functions selected, as well as
the number of fuzzy rules to be implemented).
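The four update rules (6.12a)–(6.12d) all share the same pattern: each free parameter is decremented by the learning rate times the partial derivative of S with respect to that parameter. A minimal NumPy sketch of one such step is given below; the function name `sd_update` and the dict-of-arrays layout are illustrative choices, and the gradients themselves are assumed to be supplied from the chain-rule expressions the text goes on to derive.

```python
import numpy as np

def sd_update(params, grads, eta=0.1):
    """One steepest-descent step, as in Eqs. (6.12a)-(6.12d):
    p(k+1) = p(k) - eta * dS/dp for every free parameter p.

    params : dict of NumPy arrays, e.g. {"theta0": ..., "theta": ...,
             "c": ..., "sigma": ...} holding the parameters at step k
    grads  : dict with the same keys holding dS/dp at step k
             (computed elsewhere, e.g. via the chain rules in the text)
    eta    : constant learning rate
    """
    return {name: p - eta * grads[name] for name, p in params.items()}
```

Because all four rules have the same form, keeping the parameters in a single dict lets one line of code apply (6.12a)–(6.12d) uniformly.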
From Figure 6.6, it is evident that the network output $f_j$, and hence the
performance function $S_j$ and, therefore, finally $S$, depends on
$\theta_{0j}^l$ and $\theta_{ij}^l$ only
through $y_j^l$. Similarly, the network output $f_j$ and, thereby, the performance functions
$S_j$ and $S$ depend on $c_i^l$ and $\sigma_i^l$ only through $z^l$, where $f_j$, $y_j^l$, $b$, and $z^l$ are
represented by
$f_j = \sum_{l=1}^{M} h^l y_j^l$   (6.13a)

$y_j^l = \theta_{0j}^l + \theta_{1j}^l x_1 + \theta_{2j}^l x_2 + \cdots + \theta_{nj}^l x_n$   (6.13b)

$h^l = \dfrac{z^l}{b}$, and $b = \sum_{l=1}^{M} z^l$   (6.13c)

$z^l = \prod_{i=1}^{n} \exp\!\left( -\left( \dfrac{x_i - c_i^l}{\sigma_i^l} \right)^{2} \right)$   (6.13d)
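Equations (6.13a)–(6.13d) together define the forward pass of the network: Gaussian firing strengths per rule, their normalization, linear rule consequents, and the weighted output. A minimal NumPy sketch is shown below; the function name `forward` and the array layout (consequent parameters packed as `theta[l, j, 0..n]`) are illustrative assumptions, not notation from the text.

```python
import numpy as np

def forward(x, theta, c, sigma):
    """Forward pass of the neuro-fuzzy network, Eqs. (6.13a)-(6.13d).

    x     : (n,)        input vector
    theta : (M, m, n+1) consequent parameters; theta[l, j, 0] plays the
                        role of theta_0j^l, theta[l, j, i] of theta_ij^l
    c     : (M, n)      Gaussian centres c_i^l
    sigma : (M, n)      Gaussian widths sigma_i^l
    """
    # (6.13d): rule firing strengths z^l = prod_i exp(-((x_i - c_i^l)/sigma_i^l)^2)
    z = np.exp(-(((x - c) / sigma) ** 2)).prod(axis=1)   # shape (M,)
    # (6.13c): b = sum_l z^l, normalised strengths h^l = z^l / b
    b = z.sum()
    h = z / b
    # (6.13b): rule consequents y_j^l = theta_0j^l + sum_i theta_ij^l x_i
    y = theta[:, :, 0] + theta[:, :, 1:] @ x             # shape (M, m)
    # (6.13a): network outputs f_j = sum_l h^l y_j^l
    f = h @ y                                            # shape (m,)
    return f, y, h, z, b
```

Returning the intermediate quantities $y_j^l$, $h^l$, $z^l$, and $b$ alongside $f_j$ is convenient, since the chain-rule gradient expressions for (6.12a)–(6.12d) reuse exactly these values.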
Therefore, the corresponding chain rules