3.3.3.8 Welsch Error Function
The Welsch EF is designed to suppress large errors while giving quadratic-function-like performance in the vicinity of the origin. The function is convex with respect to the x-axis near the origin and has the asymptote y = c²/2. The complex version of the function retains this form but operates on complex errors: convexity again prevails near the origin, and the plane z = c²/2 is an asymptote to the surface. On one hand the function suppresses large errors; on the other, it gives quadratic-function-like performance for small errors.
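A minimal numeric sketch of this behavior is given below, assuming the standard Welsch form E(e) = (c²/2)[1 − exp(−|e|²/c²)]; the exact parameterization of Eq. 3.27 is not reproduced here, so the constant c and the function name welsch_ef are illustrative.

```python
import numpy as np

def welsch_ef(e, c=1.0):
    """Welsch error function (assumed form): behaves like |e|^2 / 2 near
    the origin and saturates at the asymptote c^2 / 2 for large errors.
    Works for real or complex errors e, since |e|^2 = e * conj(e)."""
    return (c**2 / 2.0) * (1.0 - np.exp(-(np.abs(e)**2) / c**2))

# Near the origin the function is approximately quadratic ...
small = 0.01
print(welsch_ef(small), small**2 / 2)     # both ~5e-5
# ... while large (complex) errors are suppressed toward the asymptote c^2/2
print(welsch_ef(100 + 100j, c=1.0))       # ~0.5 = c^2/2
```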
The error functions considered in this chapter have been collected from various sources and refined from the neural network perspective. Taking this as the starting point, the basic question that needs attention is the derivation of the learning rules for the BP and the CBP. Interested readers may easily obtain the learning rule for the corresponding EF-based back-propagation algorithm by substituting the definition of the EF from Eq. 3.27. The derivatives of these functions are easily computed to implement the basic update rule for training the RVNN and the CVNN. As a consequence, the additional factor that enters the weight update rule has the same form in the BP and the CBP. How the BP and the CBP perform when the EF is varied can be validated by applying them to some well-known benchmarks.
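As an illustration, the extra factor for the Welsch EF can be sketched as follows, again assuming the (c²/2)[1 − exp(−|e|²/c²)] form; the helper name welsch_error_factor is hypothetical. Differentiating gives e·exp(−|e|²/c²), i.e. the raw error of the quadratic EF scaled by a damping term that takes the same form in the BP and the CBP:

```python
import numpy as np

def welsch_error_factor(e, c=1.0):
    """Gradient of the assumed Welsch EF with respect to the error e.
    For E = (c^2/2) * (1 - exp(-|e|^2 / c^2)) the derivative is
    e * exp(-|e|^2 / c^2): the usual error term of quadratic-EF
    back-propagation, multiplied by a factor that decays for large
    errors. The same expression covers the real (BP) case and the
    split real/imaginary complex (CBP) case."""
    return e * np.exp(-(np.abs(e)**2) / c**2)

print(welsch_error_factor(0.01))      # ~0.01: small errors pass through
print(welsch_error_factor(10 + 10j))  # ~0: large complex errors are damped
```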
3.4 Learning in Complex Domain
The learning rules correlate the input-output mapping of the nodes by iteratively adjusting the interconnection weights. Learning algorithms in the complex domain are steadily gaining prominence, yet the field is still in an embryonic stage. As the survey indicates, further avenues for the BPA continue to open up; but once established, the CBP can compete with the BPA in problems where both are applicable. Needless to say, the CBP is preferred over the BPA in applications that demand that the real and imaginary parts of complex numbers and functions be retained, with no modeling allowed to tamper with these quantities. Such applications require that the physical significance of the complex numbers be kept intact; signal processing is one typical area where such requirements exist. The standard real back-propagation (RBP) learning algorithm is widely used and straightforward, but it suffers from limitations such as slow convergence, a tendency to get trapped in local minima, and a low degree of accuracy in many cases. The CBP algorithm alleviates these issues considerably, as sketched below.
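For concreteness, here is a minimal sketch of one such iterative weight adjustment: a single complex neuron trained by split-complex CBP under the quadratic EF. The split sigmoid activation, the learning rate, and the names cbp_step and sigmoid are assumptions made for illustration, not the book's layered algorithm:

```python
import numpy as np

def sigmoid(t):
    """Real-valued logistic function, applied separately to Re and Im."""
    return 1.0 / (1.0 + np.exp(-t))

def cbp_step(w, b, x, d, eta=0.5):
    """One gradient step of split-complex back-propagation for a single
    neuron y = f(Re(net)) + 1j*f(Im(net)), net = w.x + b, under the
    quadratic EF E = |d - y|^2 / 2. Real and imaginary parts of the
    error drive the update separately, so both are retained throughout."""
    net = np.dot(w, x) + b                     # no conjugation in w.x
    y = sigmoid(net.real) + 1j * sigmoid(net.imag)
    e = d - y                                  # complex error, parts intact
    fpr = y.real * (1.0 - y.real)              # f'(Re net), using f' = f(1 - f)
    fpi = y.imag * (1.0 - y.imag)              # f'(Im net)
    delta = e.real * fpr + 1j * e.imag * fpi   # split-complex local gradient
    w = w + eta * delta * np.conj(x)           # conjugated input, as in CBP
    b = b + eta * delta
    return w, b

# Usage: drive a 2-input complex neuron toward a complex target output
rng = np.random.default_rng(0)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
b = 0.0 + 0.0j
x = np.array([0.3 + 0.8j, -0.5 + 0.2j])
d = 0.7 + 0.4j
for _ in range(200):
    w, b = cbp_step(w, b, x, d)
```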
Various gradient-based learning algorithms in the complex domain have been developed [7, 11, 17, 19, 21] in the last few years. The theoretical aspects of these algorithms take diverse viewpoints depending upon the complex-valued nonlinear activation functions used [24]. The characteristics of complex nonlinearities and the associated learning algorithms are related to the distinguishing features of complex-valued nonlinear adaptive filtering [25]. It is worth mentioning that the complex back-propagation (CBP) algorithm reduces the probability of a standstill in learning and improves the learning speed considerably [1, 7]. The performance of the CBP has been found to be far superior for complex-valued [1, 22, 26] as well as real-valued [3, 8, 24] problems.