of computational devices to work on further increasing storage capacity and the speed of elementary operations. Nevertheless, addressing these challenges with modern digital computers turns out to be ineffective and often impossible altogether. An alternative to this race of "computational complexity of the problem versus computer performance" emerged at the end of the last century, owing to the further development of the ideas of McCulloch and Pitts.
4.3 Biologically Inspired Information Processing Devices:
Neural Networks and Neurocomputers
In 1943 McCulloch and Pitts proposed the neural network approach to information processing, based on the knowledge of the structure of the cerebral cortex available at the time.
By that time it was already known that the cortex is a complex system of interconnected nerve cells, a neural network (Fig. 4.2). Each neuron has branches, the dendrites, through which it receives signals from other neurons. These signals are summed algebraically (i.e., taking into account the sign of each incoming signal), and if the sum exceeds a certain threshold, the neuron transmits along its output branch (the axon) a signal which, in principle, may reach all other nodes of the network. The signals at each neuron's input are controlled by specialized contact junctions, the synapses, which determine the structure of the neural network, i.e., the particular wiring of the neurons. This wiring, in turn, determines what specific tasks are solved in the cerebral cortex. Synapses play a fundamental role in learning processes: their number and distribution vary significantly in the course of infant development.
The neural network model of McCulloch and Pitts gives a simplified description
of the structure of the cerebral cortex. A neural network is a system of elementary
processors—formal neurons (Fig. 4.2 ). Each of them receives either positive or
negative signals from all neurons of the network, weighted to simulate synaptic
connections. The neuron sums up these signals algebraically and, if their sum
exceeds a specified threshold, generates a pulse which propagates through the
network. The initial state of the network is set by modifiable weights, which encode the problem the network is to solve. Once this initial state is defined, the state of the network evolves over time, and its final state represents the solution of the specified task. A remarkable feature of the neural
network is that information processing is carried out simultaneously by all its
neurons, i.e., with tremendous parallelism unrivaled even by modern semiconduc-
tor multiprocessor computers. In contrast to the von Neumann computer, the
particular task solved by the network is determined not by an input program, but
rather by the initial states of neurons and the network structure—the system of
weights with which neuron signals are transmitted over the network.
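The formal neuron just described can be sketched in a few lines of code. The following is a minimal illustration, not the authors' original formulation; the specific weights and thresholds are assumptions chosen so that the same unit computes different logical functions depending on its "wiring", exactly as the text argues.

```python
# Sketch of a McCulloch-Pitts formal neuron (weights/thresholds are
# illustrative assumptions, not values from the text).

def formal_neuron(inputs, weights, threshold):
    """Sum the weighted inputs algebraically; fire (output 1) if the
    sum exceeds the threshold, otherwise stay silent (output 0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# The same unit realizes different logical functions depending on the
# weights and threshold -- the wiring determines the task.
AND = lambda a, b: formal_neuron([a, b], [1, 1], 1.5)
OR  = lambda a, b: formal_neuron([a, b], [1, 1], 0.5)
NOT = lambda a:    formal_neuron([a],    [-1],  -0.5)

# XOR is not computable by a single threshold unit; it needs a
# two-layer wiring: (a OR b) AND NOT (a AND b).
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

Note that XOR already requires a small network of such units rather than a single neuron, which foreshadows the importance of the network structure emphasized above.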
A major step in the development of neural network concepts was made in 1962 by the American scientist Frank Rosenblatt. He proposed a neural network architecture, called the perceptron, based on three types of neurons.
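The excerpt names the perceptron but does not describe how it is trained. For orientation, the classical perceptron learning rule can be sketched as follows; the data set, learning rate, and function names are illustrative assumptions, not details taken from this text.

```python
# Sketch of the classical perceptron learning rule (an assumption:
# this rule is the standard one, not described in the excerpt itself).

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, target) pairs with targets 0 or 1.
    Returns the learned weights and bias of a single threshold unit."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            s = sum(xi * wi for xi, wi in zip(x, weights)) + bias
            out = 1 if s > 0 else 0
            err = target - out
            # Shift the weights in the direction that reduces the error.
            weights = [wi + lr * err * xi for wi, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Learn logical OR, a linearly separable task a single unit can solve:
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Unlike the fixed wiring of a McCulloch-Pitts network, here the weights themselves are adjusted from examples, which is what made the perceptron a model of learning.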