a lower level to an upper level, this corresponds to introducing the specific
part represented by that element into the more complex object.
4. The information element must combine the object's name and its description.
The name and description of the object must be present in every information
structure into which the model of the object enters. In fact, the object's name
must be one of the features contained in the description of that object.
In this chapter, we described an approach that makes it possible to construct
neural networks satisfying the enumerated principles. At the basis of this approach
lies the idea that any object must be represented not by a separate symbol or a
separate neuron, but by a subset of neurons (a neural ensemble). Information coding
and the conversion of codes in ensembles of different hierarchical levels are of
central importance in this approach.
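The ensemble idea can be illustrated with a minimal sketch: each object is coded by a sparse random subset of active neurons, and the similarity of two objects is the overlap of their subsets. All names and parameter values below are illustrative assumptions, not the actual field sizes used in the book.

```python
import random

def make_ensemble(n_neurons, ensemble_size, seed):
    """Represent an object as a random subset of active neurons
    (a sparse binary code). Sizes here are illustrative assumptions."""
    rng = random.Random(seed)
    return frozenset(rng.sample(range(n_neurons), ensemble_size))

def overlap(a, b):
    """Similarity of two objects = number of shared active neurons."""
    return len(a & b)

N = 10_000   # neurons in the field (assumed value)
K = 100      # active neurons per ensemble (assumed value)

apple = make_ensemble(N, K, seed=1)
pear  = make_ensemble(N, K, seed=2)

# Two independently chosen sparse ensembles share very few neurons,
# so distinct objects receive nearly orthogonal codes, while an
# ensemble overlaps completely with itself.
print(overlap(apple, apple), overlap(apple, pear))
```

Because the codes are sparse, many objects can share the same pool of neurons with little interference, which is what makes hierarchical composition of ensembles workable.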
Two coding methods that can be used to represent information in associative-
projective neural networks were examined. One of them (local connected coding)
was tested in the solution of pattern recognition problems, which will be
described later. This method has many advantages; however, to make the codes
invariant to the object's position in the image, special methods are needed, and
these generate additional problems. To overcome this disadvantage, the second
coding method (shift coding) was developed. This method makes it possible not
only to obtain a shift-invariant object representation but also to automatically
form the neural ensembles that correspond to the components of the object. It
seems to us that this property can be very useful both in solving pattern
recognition problems and in creating adaptive information retrieval systems.
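To see why a representation built from relative rather than absolute positions is shift-invariant, consider this toy sketch. It illustrates only the general principle; the actual shift-coding procedure described in the chapter differs in detail, and the helper name is hypothetical.

```python
def relative_code(points):
    """Toy shift-invariant code: describe a set of feature points by
    their offsets from a reference point of the object itself, rather
    than by absolute image coordinates. Illustrative sketch only."""
    anchor = min(points)  # a canonical reference point of the object
    return frozenset((x - anchor[0], y - anchor[1]) for x, y in points)

shape         = {(2, 3), (5, 3), (4, 7)}
shifted_shape = {(x + 10, y - 2) for x, y in shape}  # same shape, moved

# Shifting the whole object changes every absolute coordinate but
# leaves all relative offsets unchanged, so the code is identical.
print(relative_code(shape) == relative_code(shifted_shape))  # → True
```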
In neural networks with a large number of neurons, a problem arises: the memory
in which the synaptic weights of the connections are stored becomes too large. We
solved this problem by partitioning the network into separate modules and
introducing special procedures for connecting these modules. The result is a
linear (rather than quadratic) growth of memory requirements as a function of the
number of neurons.
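The memory argument can be checked with simple arithmetic: a fully connected field needs a number of weights quadratic in the number of neurons, while a field partitioned into fixed-size modules, fully connected only inside each module and joined by a bounded number of inter-module links, grows linearly. The module size and link count below are illustrative assumptions, not the book's figures.

```python
def full_weights(n):
    """Weights in a fully connected field: every neuron to every neuron."""
    return n * n

def modular_weights(n, module_size, inter_links_per_module):
    """Weights after partitioning: full connectivity only inside each
    fixed-size module, plus a fixed number of inter-module links per
    module. Parameter names and the link model are assumptions."""
    n_modules = n // module_size
    intra = n_modules * module_size * module_size
    inter = n_modules * inter_links_per_module
    return intra + inter

# Doubling the number of neurons quadruples the fully connected memory
# (quadratic growth) but only doubles the modular memory (linear growth).
print(full_weights(20_000) / full_weights(10_000))                      # → 4.0
print(modular_weights(20_000, 1_000, 5_000)
      / modular_weights(10_000, 1_000, 5_000))                          # → 2.0
```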