where,

$\Delta w_{k+1} = w_{k+1} - w_k$ ,    (6.17e)
and w represents the network's free-parameter vector in general. The momentum constant mo is usually chosen to be less than one, i.e., mo < 1.
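For illustration, a minimal sketch of one momentum-based weight update is given below. It assumes the standard momentum rule $\Delta w_{k+1} = -\alpha\,\partial S/\partial w + mo\,\Delta w_k$; the learning-rate symbol $\alpha$ and all identifiers in the code are placeholders, not the book's notation.

```python
import numpy as np

def momentum_step(w, delta_w_prev, grad, alpha=0.1, mo=0.9):
    """One weight update of backpropagation with momentum (sketch).

    w            -- current free-parameter vector w_k
    delta_w_prev -- previous weight change, Delta w_k = w_k - w_{k-1}
    grad         -- gradient of the performance index S(w) at w_k
    alpha        -- learning rate (assumed symbol, not from the text)
    mo           -- momentum constant, mo < 1
    """
    delta_w = -alpha * grad + mo * delta_w_prev  # new weight change Delta w_{k+1}
    w_next = w + delta_w                         # w_{k+1} = w_k + Delta w_{k+1}
    return w_next, delta_w
```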
6.4.2.2 Improved Backpropagation Training Algorithm
To improve the training performance of the proposed neuro-fuzzy network, we have modified the momentum version of the backpropagation algorithm by adding to it the modified error index term (modified performance index term) of (6.18a), as proposed by Xiaosong et al. (1995).
$S_m(w) = 0.5\,\gamma \sum_{r=1}^{N} \left( e_r - e_{avg} \right)^2$ ,    (6.18a)
where,

$e_{avg} = \frac{1}{N} \sum_{r=1}^{N} e_r$ .    (6.18b)
and $e_{avg}$ is the average error. Thus, the new error index (new performance index) is finally defined as

$S_{new}(w) = S(w) + S_m(w)$ ,    (6.19)
where, $S(w)$ is the unmodified performance index as defined in (6.11b). From this, the corresponding gradient can be defined as

$\dfrac{\partial S(w)}{\partial w} = \sum_{r=1}^{N} e_r(w)\, \dfrac{\partial e_r(w)}{\partial w}$ ,    (6.20a)
$\dfrac{\partial S_m(w)}{\partial w} = \gamma \sum_{r=1}^{N} \left( e_r(w) - e_{avg} \right) \dfrac{\partial e_r(w)}{\partial w}$ ,    (6.20b)
$\dfrac{\partial S_{new}(w)}{\partial w} = \sum_{r=1}^{N} \left[ e_r(w) + \gamma \left( e_r(w) - e_{avg} \right) \right] \dfrac{\partial e_r(w)}{\partial w}$ ,    (6.20c)
where the constant term $\gamma$ (gamma), with $\gamma < 1$, has to be chosen appropriately. Note that no $\partial e_{avg}/\partial w$ term appears in (6.20b), since $\sum_{r=1}^{N} (e_r - e_{avg}) = 0$.
With the modified error index extension as per Equation (6.20c), we only need to add the new vector term $\gamma \left( e(w) - e_{avg} \right)$ to the original error vector $e$. A theoretical justification of the improved training performance achieved by using the modified error index term is given in Xiaosong et al. (1995).
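A minimal sketch of how the modified error index and its gradient, Equations (6.18a)-(6.20c), could be computed is shown below. It assumes the per-sample errors $e_r(w)$ and their Jacobian $\partial e_r(w)/\partial w$ are already available; the function and variable names are illustrative only, not taken from the book.

```python
import numpy as np

def modified_error_gradient(errors, jacobian, gamma=0.5):
    """Gradient of the new performance index S_new(w), Eq. (6.20c) (sketch).

    errors   -- e_r(w), per-sample errors, shape (N,)
    jacobian -- partial e_r(w) / partial w, shape (N, n_params)
    gamma    -- constant gamma < 1 of the modified error index
    """
    e_avg = errors.mean()                               # Eq. (6.18b)
    s_m = 0.5 * gamma * np.sum((errors - e_avg) ** 2)   # Eq. (6.18a)
    # Eq. (6.20c): augment the original error vector with gamma*(e - e_avg)
    modified_errors = errors + gamma * (errors - e_avg)
    grad_new = jacobian.T @ modified_errors
    return s_m, grad_new
```

Viewed this way, the only change to an existing gradient routine is the one-line augmentation of the error vector, in line with the remark above.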