function.' Hence, for a nontrivial complex-valued function, analyticity and boundedness cannot hold together.
The contrapositive of the theorem's statement puts it in its most usable form, as it spells out conditions that serve as search tools when one embarks on a search for a CAF. It states that a nonconstant complex-valued function must be either nonanalytic and bounded, analytic and unbounded, or nonanalytic and unbounded. All three possibilities must be examined, as any new complex activation must clear this constraint. It hence follows that at least one of the three conditions must be satisfied; otherwise the activation turns out to be trivial, a constant complex function.
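To make the three admissible categories concrete, the following is a minimal numerical sketch; the candidate functions and the finite-difference Cauchy-Riemann check are illustrative assumptions, not taken from the text. For an analytic function, df/dx = -i df/dy, so a nonzero residual signals nonanalyticity.

```python
import numpy as np

def cr_residual(f, z, h=1e-6):
    """Finite-difference Cauchy-Riemann check at z: for an analytic f,
    df/dx equals -1j * df/dy, so the residual is numerically zero."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return abs(dfdx + 1j * dfdy)

# One illustrative candidate for each category permitted by the theorem:
candidates = {
    "z        (analytic, unbounded)":    lambda z: z,
    "conj(z)  (nonanalytic, unbounded)": np.conj,
    "z/(1+|z|) (nonanalytic, bounded)":  lambda z: z / (1 + abs(z)),
}

z0 = 0.7 + 0.3j
for name, f in candidates.items():
    print(f"{name}: CR residual = {cr_residual(f, z0):.3e}")
```

Only the identity passes the Cauchy-Riemann test; the other two fail it, and of those only the last is bounded, so each candidate falls into exactly one of the three permitted categories.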
3.2.1 Why Vary Activation Functions
A practical implementation of learning is not as easy a task in a CVNN as it is in its real-valued counterpart. It depends on several factors, the most important being the chosen architecture of the CVNN and the activation function of its complex-valued neurons. A literature review of the area revealed that many questions about the architecture of the CVNN and the activation functions employed have remained open, as investigators have either not addressed them or have given only partial information on these points of interest. It was discovered during the course of research that some reported results contradicted each other. For example, Leung and Haykin (1991) claimed that the fully complex activation function (CAF) given by Eq. 3.1 (where z is the net potential of the neuron in the complex domain) converged in their experiment, while T. Nitta (1997) reported that the same CAF never converged in his experiments. Leung and Haykin (1991) also stated that the singular points of the CAF could be circumvented by scaling the inputs to a region of the complex plane, but no procedure to implement this was described. These facts clearly indicate the need for comprehensive investigations to establish the properties of the CVNN. It is very important to keep in mind that, in view of Liouville's theorem, basic complex theory forces a choice between analyticity and boundedness for the activation function of a complex-valued neuron.
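Eq. 3.1 is not reproduced in this excerpt; assuming it is the standard fully complex sigmoid 1/(1 + e^(-z)) studied by Leung and Haykin, a short sketch shows why its singular points matter: the function is analytic but, consistent with Liouville's theorem, unbounded near its poles.

```python
import numpy as np

def sigmoid_c(z):
    """Fully complex sigmoid: analytic, but with poles in the z-plane."""
    return 1.0 / (1.0 + np.exp(-z))

# The denominator vanishes where exp(-z) = -1, i.e. at z = i*(2k+1)*pi.
# Approaching such a point, |f| grows without bound (roughly like 1/eps):
pole = 1j * np.pi
for eps in (1e-1, 1e-2, 1e-3):
    print(f"|f(pole + {eps})| = {abs(sigmoid_c(pole + eps)):.1f}")
```

A net potential drifting near z = i(2k+1)π therefore produces arbitrarily large outputs, which is why scaling the inputs away from these points was proposed.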
It is noteworthy that work on complex-valued neurons branches in many directions depending on the activation function and architecture chosen; hence there are different complex-valued neural networks. The first complex version of the steepest-descent learning method appeared when Widrow et al. (1975), in the USA, presented the complex least mean square (LMS) algorithm [20]. Research in this area took a significant turn in the early 1990s, when various scientists independently presented complex back-propagation (BP) algorithms with different activation functions. In 1991, Haykin considered a complex-valued neuron based on a straightforward extension of the real sigmoid activation function to complex variables [21]. Thus evolved the fully complex-valued neuron, based on the analytic property of functions in the complex domain; the dynamics were analyzed using partial derivatives with respect to the real and imaginary parts. Later, in 2002, T. Adeli et al. presented fully CVNNs with different analytic activation functions [11]. In 1992, Piazza considered a complex-valued neuron on the basis of a 2D extension (a real-imaginary or split-type activation function)
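The split-type (real-imaginary) activation just mentioned can be sketched as follows; the use of the logistic sigmoid for each component is an assumption for illustration:

```python
import numpy as np

def split_sigmoid(z):
    """Split-type activation: a real logistic sigmoid applied separately
    to the real and imaginary parts of the net potential z."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    return sig(np.real(z)) + 1j * sig(np.imag(z))

# Bounded everywhere (each component lies in (0, 1), so |f(z)| < sqrt(2))
# but nonanalytic, which is the other branch permitted by Liouville's theorem:
for z in (0.0 + 0.0j, 3.0 - 2.0j, -4.0 + 1.0j):
    print(f"f({z}) = {split_sigmoid(z)}")
```

This design trades analyticity for boundedness: the output magnitude never exceeds sqrt(2), at the cost of the function not being complex-differentiable.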
 