discipline that studies information processing, even in high dimensions.
An improved understanding of how a single neuron operates has led to the concrete
development of high-dimensional neurocomputing techniques. High-dimensional
neural networks accept and represent the components of high-dimensional
information as a single entity (cluster), thus allowing the magnitude and
phase of a point to be processed simultaneously. Moreover, extensive studies carried out during the
past few years have revealed that high-dimensional neurocomputing paradigms enjoy
numerous practical advantages over conventional neurocomputing (which deals with real-valued,
single-dimension data) when applied to high-dimensional information. They
have proved to be a powerful mathematical instrument for modeling typical systems.
Recently, there has been increasing interest in high-dimensional neurocomputing,
as it provides an easy, fast, and specific implementation of operations through high-
dimensional neural networks.
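The idea of treating magnitude and phase as a single entity can be illustrated with a minimal sketch of a single complex-valued neuron. This is a hypothetical example, not taken from the text: the function names and the split-type activation (a real sigmoid applied separately to the real and imaginary parts, one common way of extending the sigmoid to the complex domain) are assumptions for illustration.

```python
import cmath
import math

def complex_neuron(inputs, weights, bias):
    """Hypothetical sketch of a single complex-valued neuron.

    One complex multiply-accumulate scales (magnitude) and rotates
    (phase) each input simultaneously, so both components are
    processed as one entity rather than as two real channels.
    """
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Split-type activation: real sigmoid on real and imaginary parts.
    sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
    return complex(sigmoid(z.real), sigmoid(z.imag))

# A unit-magnitude input with phase pi/4; the single complex weight
# both halves the magnitude and adds pi/8 of phase in one operation.
x = [cmath.rect(1.0, cmath.pi / 4)]
w = [cmath.rect(0.5, cmath.pi / 8)]
y = complex_neuron(x, w, 0 + 0j)
print(abs(y), cmath.phase(y))
```

In a real-valued network the same rotation-and-scaling would require a 2x2 weight matrix per input; here it is a single complex parameter, which is one way to read the claimed economy of high-dimensional representations.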
1.2.1 Literature Survey
Neurocomputing comprises biologically inspired computing methodologies and
techniques capable of addressing complex real-world problems for
which conventional methodologies and approaches are ineffective or computationally
intensive. The history of high-dimensional neurocomputing starts with the devel-
opment of the CVNN, which can be traced to the ideas presented by N. Aizenberg in
1971 in the Soviet Union [13]. This direction was related to neurons with phase-
dependent functions. These ideas were later developed by I. Aizenberg in the form
of multivalued neurons and universal binary neurons [14, 15]. The research in the
area took a different turn in the early 1990s with the publication of the back-propagation
algorithm in the complex domain (CBP). The complex version of BP had, however, made its
first appearance when Widrow, McCool, and Ball (1975) announced their complex
least mean squares (LMS) algorithm. Kim and Guest (1990) published a complex-
valued learning algorithm for signal processing applications. The necessity arose from
capturing the phase information in signal processing applications, where
complex numbers naturally enter the study and must be retained throughout the
problem, since they are interpreted later. Leung and Haykin (1991) published the
CBP, in which the activation function used in the complex domain was a straightforward exten-
sion of the sigmoid function. Georgiou and Koutsougeras (1992) published another
version of the CBP incorporating a different activation function. The dynamics of
complex-valued networks were studied by Hirose (1992) and later applied to
the problem of reconstructing vectors lying on the unit circle. Benvenuto and Piazza
(1992) developed a variant of the CBP by extending a real activation function to the com-
plex domain in a different way. A complex-valued recurrent neural network was proposed
by Wang (1992) that solved complex-valued linear equations. Deville (1993) imple-
mented a complex activation function for digital VLSI neural networks that required
less hardware than a conventional real-valued neural network would need. An extensive
study of the CBP was reported by T. Nitta (1997), in which a learning algorithm along
 