An interesting confirmation of the neural-network architecture of Belousov-Zhabotinsky media was obtained from neural networks that describe the characteristics of human vision.
Vision in mammals, and in humans in particular, is a complicated photobiological process. The image of the environment is projected by the self-regulating optical system of the eye, converted and compressed by a set of horizontal, amacrine, and ganglion cells, and then transferred along the optic nerve. The optic nerves of the two eyes partially cross, sharing some information, and pass it through the lateral geniculate body to the visual cortex, where the image is integrated into a unified whole. According to generally accepted concepts, the cortex preprocesses this information: it enhances the contours of image fragments, lines of particular orientations, the boundaries between individual blocks, etc. Further interpretation of the external information is a complex psychophysiological process carried out by the cerebral cortex.
A distinctive feature of human vision is receptive fields in which information can be amplified in the center of the field and suppressed at the periphery ("on center, off surround") or, conversely, suppressed in the center and amplified at the periphery ("off center, on surround"). As a result, the retina does not respond to uniform diffuse illumination, but it does capture point and ring structures and the boundaries between dark and light.
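This center-surround behavior is often illustrated with a difference-of-Gaussians filter. The sketch below is only a schematic illustration of that idea and is not taken from the source; the kernel widths, array sizes, and stimuli are assumed values, chosen so that a zero-sum kernel ignores uniform illumination but responds at a dark/light boundary.

import numpy as np

# Assumed "on center, off surround" kernel: narrow excitatory Gaussian
# minus a broad inhibitory one, normalized so the kernel sums to zero.
x = np.arange(-15, 16, dtype=float)
center = np.exp(-x**2 / (2 * 1.5**2))
surround = np.exp(-x**2 / (2 * 5.0**2))
kernel = center / center.sum() - surround / surround.sum()

uniform = np.ones(200)                        # uniform diffuse illumination
edge = (np.arange(200) >= 100).astype(float)  # a dark/light boundary

# Interior of the response (array ends trimmed to avoid truncation effects).
resp_uniform = np.convolve(uniform, kernel, mode="same")[30:-30]
resp_edge = np.convolve(edge, kernel, mode="same")[30:-30]

print(np.abs(resp_uniform).max())  # essentially zero: no response to uniform light
print(np.abs(resp_edge).max())     # clear peak where dark meets light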
Among two-dimensional neural networks with lateral (side) interaction, of special importance are those that allow specific functions of the human brain, and in particular features of human vision, to be simulated.
In the late 1960s, Pozin and colleagues carried out a detailed study of neural networks described by the kinetic equation:
$$\frac{ds_i}{dt} = -a s_i + F(p_i) + I_i.$$
Here, s_i is the state (potential) of the ith neuron, and F(p_i) is a step function describing the state of the neuron as a function of the sum of the signals arriving from all other neurons:
$$p_i = \sum_j T_{ij} s_j,$$
and I_i is the external stimulus acting on the ith neuron.
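As an illustration only (not the authors' code), the following sketch integrates this kinetic equation with an explicit Euler scheme. The Heaviside-type step function F with threshold 0.5, the randomly chosen coupling matrix T_ij, the external stimuli I_i, and all numerical values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, a, dt = 50, 1.0, 0.01            # number of neurons, decay constant, time step
T = rng.normal(0.0, 0.2, (n, n))    # assumed coupling matrix T_ij
I = rng.uniform(0.0, 0.5, n)        # assumed external stimuli I_i
s = np.zeros(n)                     # neuron potentials s_i

def F(p, theta=0.5):
    # Step activation: 1 if the summed input exceeds the threshold, else 0.
    return (p > theta).astype(float)

for _ in range(2000):               # explicit Euler integration of ds_i/dt
    p = T @ s                       # p_i = sum_j T_ij * s_j
    s = s + dt * (-a * s + F(p) + I)

print(s[:5])                        # sample of the settled potentials

Because F is bounded, each potential relaxes toward F(p_i) + I_i and remains bounded regardless of the coupling strengths.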
This model (Fig. 4.11) represents a neural network with excitatory and inhibitory inputs. More precisely, the distribution of excitatory and inhibitory signals is described by a coupling function g(x) that depends on the distance between neurons on the surface of the network. Extended one- and two-dimensional spatial effects on the neural network were considered. The characteristic dimensions of the input signal features were significantly greater than the interneuron distances; therefore, the neural network can be viewed as a continuous, homogeneous medium.
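A common way to realize such a distance-dependent coupling, sketched below under assumed parameter values rather than as the original implementation, is to take g(x) as a narrow excitatory Gaussian minus a broader inhibitory one and to set T_ij = g(|x_i - x_j|). Applied to a block of activity much wider than the interneuron spacing, the resulting lateral input emphasizes the boundaries of the block.

import numpy as np

n = 200
x = np.arange(n, dtype=float)            # neuron positions on a 1-D chain
d = np.abs(x[:, None] - x[None, :])      # pairwise distances |x_i - x_j|

def g(r, sig_e=2.0, sig_i=6.0):
    # Excitatory Gaussian minus a broader inhibitory one, balanced so that
    # the summed input produced by spatially uniform activity is near zero.
    return np.exp(-r**2 / (2 * sig_e**2)) - (sig_e / sig_i) * np.exp(-r**2 / (2 * sig_i**2))

T = g(d)                                 # coupling matrix T_ij = g(|x_i - x_j|)

# A block of activity much wider than the interneuron spacing, so the
# discrete network approximates a continuous homogeneous medium.
s = np.where((x > 60) & (x < 140), 1.0, 0.0)

p = T @ s                                # lateral input p_i to each neuron
print(int(np.argmax(p)), int(np.argmin(p)))  # both extrema lie close to a block boundary

In the continuous-medium limit this sum becomes a convolution of the activity pattern with g(x).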