4.5 Concluding Remarks
It is well known that conventional real-valued neurons must be used in large numbers, which greatly increases the complexity of a network, when solving single- or high-dimensional problems. The higher the complexity of a network, the greater its computational cost, training time, and memory requirement. The complexity of an ANN can be reduced only by using fewer neurons or by using neurons that can process high-dimensional data as a single quantity. Even for real-valued problems, the complexity of an ANN can be reduced effectively by implementing it as a complex-valued neural network. Simulation results show that both complex-domain implementation and higher-order neural networks yield fewer learning cycles, better class distinctiveness, and superior mapping accuracy. The conventional neuron in an MLP or CMLP aggregates its input signals only linearly; such a neuron model, when applied to these problems, is therefore inferior in every respect to higher-order neurons. Moreover, the number of unknowns (learnable weights) to be determined in such a network grows with the number of neurons and hidden layers, which in turn makes the network quite slow.
Neural networks today are much more than simple networks of conventional neurons. New insights from neuroscience have revealed nonlinear neuronal activity within the cell body. This motivates the investigation of nonlinear aggregation functions, which can serve as a basis for constructing powerful neuron models. Various researchers have described the power and other advantages of higher-order neurons over conventional neurons. The computational power of a neuron depends on its order; a higher-order neuron can possess better mapping and classification capabilities. However, as the number of terms in the polynomial expression of a higher-order neuron grows, it becomes exceedingly difficult to train a network of such neurons.
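To make this growth concrete, the following minimal sketch (an illustration under the usual definition of a full polynomial neuron, not code from the chapter) counts the weights such a neuron requires: an order-k neuron over n inputs has one weight per monomial of degree at most k, i.e. C(n + k, k) terms.

```python
from math import comb

def num_terms(n_inputs: int, order: int) -> int:
    """Weights in a full polynomial (higher-order) neuron: one per
    monomial of degree <= order in n_inputs variables, C(n + k, k)."""
    return comb(n_inputs + order, order)

for n in (4, 16, 64):
    print(n, [num_terms(n, k) for k in (1, 2, 3)])
```

For 64 inputs, merely moving from a first-order to a third-order neuron raises the weight count from 65 to 47,905, which is why training full higher-order networks quickly becomes impractical.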
Considering this basic drawback of higher-order neurons, this chapter presented three efficient neuron models for science and engineering applications. Unlike other higher-order neurons, these models are simpler in terms of their parameters and do not require the monomial structure to be determined prior to training. Weight update rules based on the backpropagation learning algorithm are provided for feedforward networks built from the presented models. These models can serve as universal approximators and can conveniently be used alongside conventional or other neurons in a network. The computing and generalization capabilities of these neurons are demonstrated further in Chaps. 5 and 7, which better illustrate the motivation of this chapter.
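Although the three models themselves are defined earlier in the chapter, the following sketch shows the general flavor of training a neuron with a nonlinear aggregation function. It implements a generic second-order (quadratic) neuron with a sigmoid activation and a gradient-descent update derived by the chain rule; this is an assumed stand-in for illustration, not one of the chapter's models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class QuadraticNeuron:
    """Generic second-order neuron: y = sigmoid(x^T W x + w^T x + b).
    An illustrative stand-in, not one of the chapter's three models."""

    def __init__(self, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_inputs, n_inputs))  # pairwise terms
        self.w = rng.normal(scale=0.1, size=n_inputs)              # linear terms
        self.b = 0.0

    def forward(self, x):
        return sigmoid(x @ self.W @ x + self.w @ x + self.b)

    def update(self, x, target, lr=0.3):
        """One gradient-descent step on squared error E = 0.5 (y - t)^2."""
        y = self.forward(x)
        delta = (y - target) * y * (1.0 - y)    # dE/dz via the chain rule
        self.W -= lr * delta * np.outer(x, x)   # dz/dW_ij = x_i * x_j
        self.w -= lr * delta * x                # dz/dw = x
        self.b -= lr * delta                    # dz/db = 1

# The cross term x1*x2 lets a single quadratic neuron learn XOR,
# which no single linear (first-order) neuron can represent.
neuron = QuadraticNeuron(2)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
for _ in range(5000):
    for x, t in data:
        neuron.update(np.array(x, dtype=float), t)
for x, t in data:
    print(x, round(float(neuron.forward(np.array(x, dtype=float))), 2))
```

The same pattern extends to the chapter's models: only the aggregation function and the corresponding partial derivatives in the update step change.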