properties can be satisfied by multi-component numbers only in dimensions 1, 2, 4, and 8, that is, in real numbers, complex numbers, quaternions, and octonions. Such a generalization is not possible for three-dimensional numbers [27, 28] or in other higher dimensions. Another way of extending the dimensionality of neural networks is through the use of real-valued vectors, which produces similar effects in neural networks. This notion gives rise to an N-dimensional vector neuron, which can deal with N impinging signals as one cluster (a single entity). The potent computational power of the N-dimensional vector neuron corresponds, in N dimensions, to the ability of the complex-valued neuron in two dimensions. The decision boundary of an N-dimensional vector neuron consists of N hyperplanes that intersect each other orthogonally and divide the decision region into N equal sections, as in the case of a complex-valued neuron. This direction of increasing the dimensionality of the artificial neuron, taking the task domain into account, offers high computational power in high-dimensional neural networks.
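As a rough illustration only, the following sketch shows one simple way an N-dimensional vector neuron can treat its N impinging signals as a single entity. The orthogonal weight matrix W, the threshold vector theta, and the componentwise sign activation are assumptions chosen for this sketch, not the exact formulation of the referenced models.

import numpy as np

def vector_neuron(x, W, theta):
    """Sketch of an N-dimensional vector neuron.

    The N input signals in x are handled as one cluster: W rotates the
    input vector, theta shifts it, and a componentwise sign activation
    is applied.  The zero set of each output component is a hyperplane;
    for an orthogonal W these N hyperplanes intersect orthogonally and
    together form the neuron's decision boundary.
    """
    s = W @ x + theta      # aggregate the N signals as a single vector
    return np.sign(s)      # componentwise sign activation

# Example: a 3-dimensional vector neuron with a random orthogonal weight matrix.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # QR factorization yields an orthogonal W
theta = np.zeros(3)
print(vector_neuron(np.array([0.2, -1.0, 0.5]), W, theta))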
The activation functions of high-dimensional neurons play a very prominent role in the structure, characteristics, and performance of the corresponding high-dimensional neural networks. A wide variety of activation functions for high-dimensional neurons has been investigated, including the phase-preserving function, the circular-type function, the split-type function, the locally analytic function, and the analytic function [5, 6, 8, 12, 29]. Sometimes these functions are contingent upon the problem, and sometimes they depend on the architecture of the artificial neuron. The properties of these functions also shape the nature of the derived learning algorithms; two of the functions mentioned above are sketched below.
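As a minimal sketch only, two commonly cited complex-valued choices are written out here; the tanh nonlinearity and the function names are assumptions made for illustration, not the exact definitions used in the cited references.

import numpy as np

def split_tanh(z):
    """Split-type activation: a real tanh applied separately to the real
    and imaginary parts of the complex net input."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def amplitude_phase_tanh(z):
    """Phase-preserving (amplitude-phase) activation: the magnitude is
    squashed by tanh while the phase of the net input is left unchanged."""
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

z = np.array([1.0 + 2.0j, -0.5 + 0.3j])
print(split_tanh(z))             # phase is generally altered
print(amplitude_phase_tanh(z))   # phase of each entry is preserved

The split-type function treats the real and imaginary channels independently, whereas the phase-preserving function acts only on the amplitude, which is one reason the choice of activation function influences the derived learning algorithm, as noted above.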
Application of the presented networks to engineering problems is also challenging. The processing of three- or four-dimensional vector data, such as color/multispectral image processing, prediction of three-dimensional protein structures, and motion control in three-dimensional space, are the natural candidates for such networks. Subsequent chapters take up specific example problems from different application domains to present the motivation and theme of the topic.
1.3 Neurocomputing in Machine Learning
Researchers in artificial intelligence and related areas have long sought to view the brain as a glorified computing machine. Highly distributed cooperative computation deepens our understanding of the human brain and catalyzes the development of computing machinery. Using the brain as a metaphor, distributed computing has been applied successfully in neurocomputing to realize intelligence in machines. Machine learning is mainly concerned with the application of artificial intelligence to large databases. With innovations in computer technology, neurocomputing techniques have become very prominent for machine learning applications. Machine learning also assists us in designing intelligent systems and in optimizing their performance using example data or past experience. If an intelligent system has the ability to learn and adapt in a changing environment, then the system designer need not foresee and provide answers for all possible situations.
 