(among summation, radial basis function, and product) and proposed compensatory
nonconventional neuron models in the real and complex domains in 2010, obtaining
accelerated learning. Unsupervised learning and image processing techniques were
further coupled with neural networks and applied to image processing and computer
vision applications to test the effectiveness of neurocomputing. In 2011, Tripathi
used a family of quasi-arithmetic means, which covers the entire interval of averaging
operations between the minimum and the maximum, as an aggregation function and
demonstrated the root power-mean neuron (RPN), which is naturally general and
includes various existing artificial neurons as special cases. The successive chapters
of this book consider these issues as a basis for a better explanation of high-dimensional
neurocomputing with higher-order neural processing.
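As an illustration of this generality, consider the quasi-arithmetic (root power)
mean M_p(x) = ((1/n) sum_i x_i^p)^(1/p): it approaches the minimum of the inputs
as p tends to minus infinity, equals the arithmetic mean at p = 1, and approaches
the maximum as p tends to plus infinity. The minimal Python sketch below (our own
illustration; the function name root_power_mean and the choice of exponents are
not taken from the original formulation) demonstrates this behavior for strictly
positive inputs:

import numpy as np

def root_power_mean(x, p):
    # Quasi-arithmetic (root power) mean: interpolates between min(x)
    # (p -> -infinity) and max(x) (p -> +infinity); p = 1 gives the
    # arithmetic mean. Assumes strictly positive inputs so that
    # negative and fractional powers are well defined.
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

inputs = [0.2, 0.5, 0.9]
for p in (-50, -1, 1, 2, 50):
    print(f"p = {p:>4}: M_p = {root_power_mean(inputs, p):.4f}")
# p = -50 lies close to min(inputs) = 0.2; p = 50 lies close to max(inputs) = 0.9

Existing neurons emerge as special cases of such an aggregation: p = 1 recovers
the conventional summation neuron up to the 1/n factor, while p tending to 0
gives the geometric mean, which relates to product-type aggregation.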
In due course of time, the idea of changing the error function and the activation
function also evolved for better optimization in neurocomputing. Werbos and Titus
(1978) and Gill and Wright (1981) attempted different error functions in optimization
schemes. Later, in 1983, Rey stated that by varying the error function in an optimization
scheme, the result could be improved substantially. The statement was backed
by a demonstration (Rey 1983) that an absolute error function-based optimization
solved a curve-fitting problem more efficiently than the standard quadratic error
function-based optimization. Fernandez (1991) implemented some new error functions
designed to counter the ill effects of local minima by weighting the
errors according to their magnitudes. Matsuoka (1991) reported a BPA based on a
logarithmic error function and the elimination of local minima. Ooyen and Nienhuis (1992)
used an entropy-type error function and showed that it performs better than
quadratic error function-based BP for function approximation problems.
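To make the contrast concrete, the following minimal Python sketch (our own
illustration, not code from the works cited) evaluates the quadratic, absolute,
and entropy-type error functions on the same output/target pair:

import numpy as np

def quadratic_error(y, t):
    # Standard quadratic (sum-of-squares) error.
    return 0.5 * np.sum((y - t) ** 2)

def absolute_error(y, t):
    # Absolute error; penalizes large residuals less harshly
    # than the quadratic error.
    return np.sum(np.abs(y - t))

def entropy_error(y, t, eps=1e-12):
    # Cross-entropy-type error for outputs and targets in (0, 1);
    # clipping avoids log(0).
    y = np.clip(y, eps, 1.0 - eps)
    return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

y = np.array([0.8, 0.3])  # network outputs
t = np.array([1.0, 0.0])  # targets
for error in (quadratic_error, absolute_error, entropy_error):
    print(error.__name__, error(y, t))

The gradients of these functions weight residuals differently, which is what
drives the differences in convergence behavior reported above.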
Similarly, a wide variety of activation functions has also been considered for
different optimization schemes. Huang and Babri (1998) reported a result that spelt
out an upper bound on the number of hidden neurons for neural networks with
arbitrary nonlinear activation functions. Guarneri and Piazza (1999) reported an
adaptive spline-type activation function for standard neural network-based training.
The variation in activation functions is more remarkable when dealing with
high-dimensional neurocomputing. The analyticity (differentiability) and boundedness
of the functions are important issues in high dimensions; by Liouville's theorem,
a complex function that is bounded and analytic on the whole complex plane must
be constant, so both properties cannot hold at once. A wide variety of activation
functions has been investigated, including the analytic function [5], local analytic
function [6, 7], split-type function [8, 9], phase-preserving function [10, 11], and
circular-type function [12], for neural networks dealing with high-dimensional
parameters. Chapter 3 presents and critically analyzes the error functions and
activation functions for high-dimensional neurocomputing.
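As a concrete example of one such compromise, the minimal Python sketch below
(our own illustration; the name split_tanh is hypothetical) implements a split-type
complex activation that applies a real tanh separately to the real and imaginary
parts, giving a bounded but non-analytic function:

import numpy as np

def split_tanh(z):
    # Split-type complex activation: a real tanh is applied to the
    # real and imaginary parts separately. The output is bounded,
    # but the function is not analytic in z.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

z = np.array([1.0 + 2.0j, -0.5 + 0.3j])
print(split_tanh(z))  # each component lies in (-1, 1)

Split functions of this kind trade analyticity for boundedness, which is precisely
the tension noted above.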
1.2 High-Dimensional Neurocomputing
The evolution of neurocomputing has envisaged an advanced and mechanized
world, where human life is made better by developments in biologically inspired
techniques. These technologies make an important bridge to computer science and other