It is imperative to consider an automated and robust learning method that performs well and is free from the limitations of BP. In this chapter, a modified real-resilient propagation (RRPROP) learning algorithm proposed in [27] has been extended to the complex domain. Moreover, an improved complex-resilient propagation (CRPROP) algorithm with an error-dependent weight backtracking step has also been presented for efficient learning. These learning algorithms have been defined with a bounded but nonanalytic complex activation function.
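For orientation before the derivations, the following is a minimal sketch of a resilient-propagation step applied in split fashion to complex weights. The hyperparameter values are common RPROP defaults rather than those of [27], and CRPROP's error-dependent backtracking (reverting a step only when the overall error grows) is noted but not implemented.

```python
import numpy as np

def rprop_step(w, grad, grad_prev, step, dw_prev,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP+ step on a real-valued parameter array.

    For a split complex-domain extension, call this separately on the
    real and imaginary parts of the complex weights. Hyperparameters
    are common RPROP defaults, not values from [27]. CRPROP's
    error-dependent backtracking would additionally revert a step only
    when the overall error has increased (not shown here).
    """
    same = grad * grad_prev > 0        # gradient kept its sign: accelerate
    flip = grad * grad_prev < 0        # gradient changed sign: overshoot

    step = np.where(same, np.minimum(step * eta_plus, step_max), step)
    step = np.where(flip, np.maximum(step * eta_minus, step_min), step)

    dw = -np.sign(grad) * step         # only the sign of the gradient is used
    dw = np.where(flip, -dw_prev, dw)  # backtrack: undo the previous update
    grad = np.where(flip, 0.0, grad)   # skip adaptation on the next step

    return w + dw, grad, step, dw

# Split usage for a complex weight array w = w_re + 1j * w_im:
# step each part with its own real gradient, then recombine.
```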
3.4.1 Complex Back-Propagation (CBP) Learning
The CBP algorithm for CVNNs has been proposed by several researchers in recent years [1]. CBP is the complex domain version of RBP. The aim is to approximate a function that maps the inputs to the outputs using a finite set of learning patterns (z, y):

y = f(z, w)

where w ∈ ℂ corresponds to all weights and thresholds in the neural network, z ∈ ℂ corresponds to all complex-valued training input patterns, and y ∈ ℂ corresponds to all complex-valued training output patterns. There are two broad classes of CBP proposed by several researchers. One is based on an activation function that maps a complex number to a complex number through a fully complex function. In the other approach, the complex variable is split into its real and imaginary parts, and the activation function is applied separately to these parts to obtain the real and imaginary parts of the complex output.
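As an illustration of the second (split) approach, the sketch below applies a bounded real function to the real and imaginary parts independently. The choice f = tanh is an assumption for illustration; the specific real function of Eq. 3.3 is defined in the text.

```python
import numpy as np

def split_activation(z, f=np.tanh):
    """Split-type complex activation: apply a bounded real function f
    separately to the real and imaginary parts of z. The result is
    bounded but not analytic in z (f = tanh is an illustrative choice;
    Eq. 3.3 may define a different bounded real function)."""
    return f(z.real) + 1j * f(z.imag)

z = np.array([0.5 + 2.0j, -1.0 - 0.3j])
print(split_activation(z))   # approx. [0.46+0.96j, -0.76-0.29j]
```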
In order to directly process complex signals by an artificial neural network, various gradient-based learning algorithms in the complex domain have been developed in [7, 11, 19, 21]. The theoretical aspects of these algorithms take diverse viewpoints depending upon the complex-valued activation functions used, as explained in the previous section. The derivation of learning rules in this chapter is based on the split-type activation function given in Eq. 3.3. CBP with this activation function compromises the analytic property of the activation function for boundedness. With a split activation function, the update rules are linear combinations of the derivatives of the real and imaginary components of the activation function. Both the real and imaginary parts of the weights are modified as functions of the real and imaginary parts of the signals [9]. This structure reduces the probability of standstill [7, 9] in CBP as compared to RBP and enhances the average learning speed. The unit of learning is a complex-valued signal, and learning in a complex domain neural network amounts to adjusting a 2D motion [7].
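For concreteness, one common form of the split-activation output-layer update (a sketch following the split CBP literature; the chapter's exact rule, derived from Eq. 3.3, may differ in notation) for a weight w_nm with net input u_n, incoming signal z_m, error e_n = D_n − Y_n, and learning rate η is:

Δw_nm = η z̄_m [ Re(e_n) f′_R(Re(u_n)) + i Im(e_n) f′_I(Im(u_n)) ]

Here the derivatives f′_R and f′_I of the real and imaginary component functions enter as a linear combination, weighted by the conjugated input signal, which is exactly the structure referred to above.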
The gradient descent-based error back-propagation is a very popular learning procedure for feed-forward neural networks. This conventional back-propagation learning algorithm in the real domain (RBP) has been extended to the complex domain (CBP). Let e_n = D_n − Y_n be the difference between the desired (D_n) and actual (Y_n) outputs of the nth neuron in the output layer. The real-valued cost function (MSE) can be given as:

E = (1/2) Σ_n e_n ē_n = (1/2) Σ_n |e_n|²
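A minimal numerical check of this cost function (NumPy; the sample values are illustrative):

```python
import numpy as np

# Illustrative desired and actual complex outputs for three output neurons.
D = np.array([1.0 + 1.0j, -0.5 + 0.2j, 0.3 - 0.7j])
Y = np.array([0.8 + 1.1j, -0.4 + 0.1j, 0.2 - 0.6j])

e = D - Y                               # complex error per output neuron
E = 0.5 * np.sum(e * np.conj(e)).real   # E = (1/2) * sum_n |e_n|^2, real-valued
print(E)                                # 0.045
```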