$$
E \;=\; \frac{1}{2N}\sum_{n=1}^{N}\lvert e_n\rvert^{2} \;=\; \frac{1}{2N}\sum_{n=1}^{N}\Big[\big(\Re(e_n)\big)^{2} + \big(\Im(e_n)\big)^{2}\Big] \tag{3.25}
$$
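As a quick numerical check of (3.25), the cost can be evaluated either from $\lvert e_n\rvert^2$ directly or from the separate real and imaginary parts; both give the same scalar. The NumPy sketch below uses arbitrary, purely illustrative error values.

```python
import numpy as np

# Hypothetical batch of complex errors e_n = desired - actual (illustrative values)
e = np.array([0.3 + 0.4j, -0.1 + 0.2j, 0.05 - 0.15j])
N = e.size

# Eq. (3.25): both forms of the cost agree
E_abs = np.sum(np.abs(e) ** 2) / (2 * N)
E_parts = np.sum(e.real ** 2 + e.imag ** 2) / (2 * N)
assert np.isclose(E_abs, E_parts)
print(E_abs)
```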
The cost function E is a scalar quantity which the ℂBP algorithm minimizes by modifying the weights and biases of the network. Since this function is a real-valued non-analytic function, the partial derivatives of E with respect to the real and imaginary parts of the weights and biases are found separately. The weights and biases are recursively updated by applying gradient descent on the energy function E, given by
$$
w^{\text{new}} \;=\; w^{\text{old}} - \eta\,\nabla_{w} E, \tag{3.26}
$$
where the gradient $\nabla_{w} E$ is derived with respect to both the real and imaginary parts of the complex weights. The weight update is proportional to the negative of the gradient; hence,
$$
\Delta w \;=\; -\,\eta\,\nabla_{w} E \;=\; -\,\eta \times \left( \frac{\partial E}{\partial \Re(w)} \;+\; j\,\frac{\partial E}{\partial \Im(w)} \right) \tag{3.27}
$$
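A minimal sketch of the update in (3.26)–(3.27): because E is real-valued and non-analytic, the gradient is assembled from the separate partial derivatives with respect to the real and imaginary parts of w. The finite-difference gradient, the toy cost, and all names below are illustrative assumptions, not the book's implementation.

```python
import numpy as np

def complex_gradient(E, w, h=1e-6):
    """Numerical gradient of a real-valued cost E(w) with respect to a
    complex weight w, per Eq. (3.27): dE/dRe(w) + j * dE/dIm(w).
    (Central differences are used here purely for illustration.)"""
    dE_re = (E(w + h) - E(w - h)) / (2 * h)          # partial w.r.t. real part
    dE_im = (E(w + 1j * h) - E(w - 1j * h)) / (2 * h)  # partial w.r.t. imaginary part
    return dE_re + 1j * dE_im

# Toy cost: squared distance of w from a target weight (real-valued, non-analytic in w)
target = 1.0 - 2.0j
E = lambda w: 0.5 * abs(w - target) ** 2

w, eta = 0.0 + 0.0j, 0.5
for _ in range(50):
    w = w - eta * complex_gradient(E, w)   # Eq. (3.26): w_new = w_old - eta * grad
print(w)  # converges toward 1 - 2j
```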
3.4.2 Complex Resilient Propagation (ℂRPROP) Learning
The ℝBP algorithm is widely used for training neural networks because of its ease of implementation. However, its slow rate of convergence and its tendency to get stuck in local minima are the major limitations on its performance. Several modifications and variations of the basic error back-propagation procedure, such as the addition of a momentum term, modified EFs [28, 29], the Delta-Bar-Delta algorithm [30] and Quickprop [31], were suggested to overcome these problems. However, none of these modifications accelerates the rate of convergence to a large extent. For efficient learning and faster convergence, resilient propagation in the real domain (ℝRPROP) [32] was proposed. The basic principle of this algorithm is to eliminate the harmful influence of the size of the partial derivatives of the EF on the weight update; the adaptation is made dependent only on the sign of the derivative. A further modification of real resilient propagation, which does not increase the complexity of the algorithm, was suggested [27] for a significant improvement in learning speed. This local-gradient-based adaptation technique learns quickly with markedly fewer computations. Resilient back-propagation (RPROP) is a local adaptive learning scheme that performs supervised batch learning in multi-layer neural networks. It is aimed at eliminating the harmful influence of the size of the partial derivative on the weight step. In RPROP, only the sign of the derivative is considered, to indicate the direction of the weight update, and the size of the weight change is determined exclusively by a weight-specific update value. The complex RPROP algorithm can be derived by extending real RPROP to the complex domain, as sketched below.
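One plausible reading of that extension is to maintain separate RPROP step sizes for the real and imaginary parts of each weight and apply the sign-based rule to each part independently. The sketch below follows that reading; the acceleration constants are the standard RPROP values, while the function and variable names (and the gradient-zeroing on a sign change, as in the iRPROP⁻ variant) are assumptions for illustration.

```python
import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5        # standard RPROP acceleration/deceleration factors
STEP_MIN, STEP_MAX = 1e-6, 50.0

def crprop_step(grad, prev_grad, step):
    """One ℂRPROP-style update for one part (real or imaginary) of a weight.
    grad/prev_grad: current and previous partial derivative of E w.r.t. that part;
    step: the weight-specific update value. Only the SIGN of the gradient sets
    the direction of the change; its magnitude never enters the weight step."""
    s = grad * prev_grad
    if s > 0:                          # same sign: accelerate
        step = min(step * ETA_PLUS, STEP_MAX)
    elif s < 0:                        # sign flipped: overshot a minimum, back off
        step = max(step * ETA_MINUS, STEP_MIN)
        grad = 0.0                     # skip the update this iteration (iRPROP- style)
    delta = -np.sign(grad) * step
    return delta, grad, step

# Toy demo on E(w) = 0.5*|w - (1-2j)|^2, whose exact partials are
# dE/dRe(w) = Re(w - target) and dE/dIm(w) = Im(w - target).
target = 1.0 - 2.0j
w = 0.0 + 0.0j
state = {"re": (0.0, 0.1), "im": (0.0, 0.1)}   # (prev_grad, step) per part
for _ in range(60):
    g_re, g_im = (w - target).real, (w - target).imag
    d_re, pg_re, s_re = crprop_step(g_re, *state["re"]); state["re"] = (pg_re, s_re)
    d_im, pg_im, s_im = crprop_step(g_im, *state["im"]); state["im"] = (pg_im, s_im)
    w += d_re + 1j * d_im
print(w)  # approaches 1 - 2j
```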