The weight update in back-propagation learning depends on the magnitude of the partial derivative of the error function, while in resilient propagation (RPROP) it is independent of this magnitude; the weight update in RPROP depends only on the temporal behavior of the sign of the partial derivative. This results in faster convergence of the learning process [32].
The C-RPROP algorithm has been derived by extending the real RPROP given in [27] to the complex domain. The aim of the C-RPROP algorithm is to modify the real and imaginary parts of the complex weights by the update values $\Delta^{\Re}(t)$ and $\Delta^{\Im}(t)$, in such a way as to decrease the overall error. These update values solely determine the sizes of the weight updates $\Delta(\Re(w(t)))$ and $\Delta(\Im(w(t)))$, while the signs of the partial derivatives $\partial E/\partial \Re(w)$ and $\partial E/\partial \Im(w)$ determine the direction of each weight update. Here $\partial E/\partial \Re(w)$ and $\partial E/\partial \Im(w)$ are the gradients summed over all patterns of the pattern set. Each update value is initialized to $\Delta_0$ and is then modified according to the gradient direction, as given in the algorithm. Some other parameters are also set at the beginning of the resilient propagation algorithm: the increase factor ($\mu^+$), the decrease factor ($\mu^-$), the minimum step size ($\Delta_{\min}$), and the maximum step size ($\Delta_{\max}$). The weight updates are computed as follows:
$$\Delta(\Re(w(t))) = -\,\mathrm{sign}\!\left(\frac{\partial E(t)}{\partial \Re(w)}\right)\Delta^{\Re}(t) \qquad (3.28)$$
$$\Delta(\Im(w(t))) = -\,\mathrm{sign}\!\left(\frac{\partial E(t)}{\partial \Im(w)}\right)\Delta^{\Im}(t) \qquad (3.29)$$
In other words, every time the partial derivative of the corresponding weight changes its sign, which indicates that the last update was too big, the update value is decreased by the factor $\mu^-$. If the derivative retains its sign, the update value is slightly increased by the factor $\mu^+$ in order to accelerate convergence. The update value thus effectively grows and shrinks according to the sign behavior of the gradient. The factors $\mu^-$ and $\mu^+$ are set to 0.5 and 1.2, respectively.
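Since the real and imaginary parts of each complex weight are adapted independently, the scheme is straightforward to express in code. Below is a minimal Python sketch of one C-RPROP step for a single complex weight, implementing Eqs. (3.28) and (3.29) together with the grow/shrink rule described above. The concrete step-size bounds, the packing of the two partial derivatives into one complex number grad, and the omission of the sign-change backtracking discussed in Sect. 3.4.3 are simplifying assumptions of this sketch, not the authors' exact pseudocode.

```python
import numpy as np

MU_PLUS, MU_MINUS = 1.2, 0.5      # increase/decrease factors from the text
STEP_MIN, STEP_MAX = 1e-6, 50.0   # illustrative Delta_min / Delta_max bounds

def rprop_step(grad, grad_prev, step):
    """One RPROP update for a single real parameter (Re or Im part of w).

    grad, grad_prev: dE/d(part) summed over all patterns, at the current
    and previous epoch; step: the current update value Delta.
    """
    if grad * grad_prev > 0:        # sign retained: grow the update value
        step = min(step * MU_PLUS, STEP_MAX)
    elif grad * grad_prev < 0:      # sign changed: last update was too big
        step = max(step * MU_MINUS, STEP_MIN)
    delta = -np.sign(grad) * step   # Eqs. (3.28) / (3.29)
    return delta, step

def c_rprop_step(w, grad, grad_prev, step_re, step_im):
    """Update Re(w) and Im(w) independently, as in Eqs. (3.28)/(3.29)."""
    d_re, step_re = rprop_step(grad.real, grad_prev.real, step_re)
    d_im, step_im = rprop_step(grad.imag, grad_prev.imag, step_im)
    return w + d_re + 1j * d_im, step_re, step_im
```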
3.4.3 Improved Complex Resilient Propagation (C-iRPROP)
The change of sign of the partial derivative in successive steps is considered a jump over a minimum, which results in a reversal of the previous weight update [32]. This decision does not take into account whether the weight update has caused an increase or decrease of the error. Such a backtracking step does not seem proper, especially when the overall error has decreased. Therefore, here we present the C-RPROP algorithm with an error-dependent weight reversal step; the resulting algorithm has shown excellent performance. Assuming that the network is close to a (local) minimum, each weight update that does not lead to a change of sign of the corresponding partial derivative takes the algorithm closer to the optimum. Hence, the previous weight update is reverted only when it has caused a change of sign in the corresponding partial derivative together with an overall error increase:
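Formally, for the real part (the imaginary part is treated analogously), this error-dependent reversal can be written as

$$\Delta(\Re(w(t))) = -\,\Delta(\Re(w(t-1))) \quad \text{if } \frac{\partial E(t-1)}{\partial \Re(w)}\cdot\frac{\partial E(t)}{\partial \Re(w)} < 0 \ \text{ and }\ E(t) > E(t-1).$$

The following Python sketch extends the illustrative rprop_step above with this check. The variables err and err_prev (the overall error at the current and previous epoch) and the zeroing of the stored gradient after a sign change, which prevents the step size from shrinking twice in a row, follow the iRPROP+ convention of [32]; both are assumptions of this sketch rather than the authors' exact formulation.

```python
def c_irprop_step(grad, grad_prev, step, delta_prev, err, err_prev):
    """One C-iRPROP update for a single real parameter (Re or Im part of w).

    The previous weight update is reverted only when the partial derivative
    has changed sign AND the overall error has increased.
    """
    if grad * grad_prev > 0:        # sign retained: move closer to the optimum
        step = min(step * MU_PLUS, STEP_MAX)
        delta = -np.sign(grad) * step
    elif grad * grad_prev < 0:      # sign change: possible jump over a minimum
        step = max(step * MU_MINUS, STEP_MIN)
        delta = -delta_prev if err > err_prev else 0.0  # error-dependent reversal
        grad = 0.0                  # suppress step-size adaptation next epoch
    else:
        delta = -np.sign(grad) * step
    return delta, step, grad
```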