can be simulated on a digital computer. In fact, this direction has driven effective developments in machine intelligence, bioinformatics, computer vision, and other computer science and engineering applications. An artificial neuron is a simplified model of a biological neuron that can approximate its functional capabilities. For the time being, however, it is far from clear how much of this simplification is justified, since at present we have only a poor understanding of neuronal function in biological neurons. Conventional neurons, based on radial basis function (RBF) or summation aggregation functions, were widely used in first-generation ANNs. However, networks built from these neurons require a large number of neurons, which increases topological complexity, training time, and memory requirements. This problem is circumvented by the higher-order neurons of the second generation, whose networks have shown improved results with fewer neurons. However, they suffer from a typical curse of dimensionality due to a combinatorial explosion of terms [31-33].
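The combinatorial explosion can be made concrete by counting the product terms a k-th order neuron must weight. The sketch below (function name is illustrative, not from the text) counts the distinct degree-k monomials over n inputs:

```python
from math import comb

def num_terms(n, k):
    """Distinct monomials of degree exactly k over n inputs, i.e. the
    number of weighted product terms a k-th order neuron carries."""
    return comb(n + k - 1, k)

# The term count explodes combinatorially as the input dimension n grows:
for n in (4, 16, 64):
    print(n, num_terms(n, 3))  # third-order neuron
```

Even at modest dimensions the parameter count becomes impractical, which is why neuron models that capture nonlinear input correlations without enumerating all such terms are of interest.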
Therefore, it is desirable to investigate potential neuron models that capture nonlinear correlations among input components but are free from this problem. This topic also focuses on the design and assessment of such higher-order neurons. Supervised learning schemes in high dimensions are discussed, and unsupervised learning in the complex domain is presented for the extraction of lower-dimensional features with significant discriminating power. Many real applications in adaptive computing involve signals that are inherently high dimensional. The physical characteristics of these signals and their nonlinear transformations can be approximated efficiently if they are represented and operated on as a cluster (a single entity) of component signals. The development of high-dimensional neural networks that preserve and process these signals in high dimensions is therefore gaining increasing attention.
The theories and practices presented in this topic are an attempt to bridge the gaps among the prominent concepts in second-generation neurocomputing. Although the theory is maturing, its applications are only beginning to be understood, as is clear from the work available in the area. Many practical problems from various fields (robotics, medicine, industry, military, aviation, and so on) involve modeling with high-dimensional neural networks. Their applications will become clearer once they are applied to some standard problems. Therefore, the major issues raised are addressed in this topic by applying high-dimensional neural networks to problems of classification, approximation, function mapping, and pattern recognition. The theory and application of complex-variable-based neural networks, the most basic case of high-dimensional neurocomputing, have been given rigorous attention from the viewpoint of second-generation neurocomputing.
The update rule is exactly the same as the one used in the conventional error backpropagation (BP) algorithm for training an ANN. However, it must be noted that a complex number carries phase information embedded in it. This amounts to saying that information that would otherwise be fed to the ANN as separate inputs (as is usually done while training ANNs) becomes coupled, reducing the number of inputs by as much as half (since two real numbers make one complex number, with the phase information embedded in it) while still preserving the information in the form of phase. In addition, the complex-variable setting imposes constraints on the analyticity of activation functions. It is hence not clear how the new learning
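The input halving, phase coupling, and real-case-like update can be illustrated with a minimal sketch (all names are hypothetical; the split activation and the conjugate-form delta rule are standard devices in complex-valued networks, not details taken from this text):

```python
import math

def pack(x1, x2):
    # Two real-valued inputs become ONE complex input; the second
    # component rides along as the imaginary (phase-bearing) part.
    return complex(x1, x2)

def split_tanh(z):
    # "Split-type" activation: tanh applied separately to the real and
    # imaginary parts, a common workaround for the analyticity constraint
    # (a bounded entire function is necessarily constant).
    return complex(math.tanh(z.real), math.tanh(z.imag))

def lms_step(w, z, target, eta=0.1):
    # One complex LMS-style update for a linear neuron y = w*z: the same
    # delta-rule form as the real case, with conj(z) in place of z.
    e = target - w * z
    return w + eta * e * z.conjugate()

# Usage: drive w so that w*z matches the target output.
w, z, target = 0j, pack(1.0, 1.0), complex(2.0, 0.0)
for _ in range(100):
    w = lms_step(w, z, target)
```

The update has the familiar real-valued delta-rule shape; the only structural change is the complex conjugate on the input, which is what makes the error magnitude decrease at each step.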