environment [75]. The user supplies a utility function U and a stochastic model
F of the environment to be controlled. Dynamic programming is used to solve for
another function, J, which serves as a secondary or strategic utility function. The
key theorem is that any strategy of action that maximizes J in the short term will
also maximize the sum of U over all future times. Adaptive critic designs are
defined more precisely as designs that include a critic network, a network whose
output is an approximation to the J function, to its derivatives, or to something
closely related to both. This approach has been realized in different ways, so that
one can now speak of a family of adaptive critic methods.
For example, adaptive-critic-based neural networks have been used to design a
controller for a benchmark problem in aircraft autolanding [76], to steer an agile
missile [77], to build neurocontrollers for turbogenerators in a multimachine power
system [78], and to construct new learning methods (creative learning) for intelligent
autonomous mobile robots [79].
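To make the role of J concrete, the following is a minimal sketch of the recursion behind the key theorem, written in a standard Bellman form; the time index t, the discount factor \gamma, and the expectation over the stochastic model F are notational assumptions of this sketch rather than details taken from [75]:

\[
J\big(x(t)\big) = \max_{u(t)} \Big( U\big(x(t), u(t)\big) + \gamma \, \mathbb{E}\big[ J\big(x(t+1)\big) \big] \Big),
\qquad x(t+1) \sim F\big(x(t), u(t)\big), \quad 0 < \gamma \le 1 .
\]

Under this reading, a critic network that approximates J (or its derivatives) lets the controller act greedily with respect to J at each step while still optimizing the discounted sum of U over all future times.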
We plan to use adaptive critic designs in our future investigations of micromechanics
and microfabrication control.
2.7 RTC, RSC, LIRA, and PCNC Neural Classifiers
We have developed effective neural network classification systems. They have
been under development since 1970 and have been used as control systems for mobile
autonomous robots, in texture recognition tasks, in voice-based identity verification,
in handwriting and face recognition, and in the new area of micromechanics. The most
interesting of these neural classifiers are the Random Threshold Classifier (RTC) [80, 81],
the Random Subspace Classifier (RSC) [12, 82], the LIRA (Limited Receptive Area)
neural classifier [12-19], and the PCNC neural classifier [83, 84]. In this book, we
describe all of these models, present the results obtained with them, and summarize
the advantages of our approach.
References
1. Hecht-Nielsen, R., Neurocomputing. Addison-Wesley, 1991, 433 pp.
2. Kohonen, T., Self-Organization and Associative Memory. Springer-Verlag, Berlin, 1984.
3. Wasserman, P. D., Neural Computing: Theory and Practice. Van Nostrand Reinhold, 1989.
4. Browne, A. (Ed.), Neural Network Analysis, Architectures and Applications. Institute of
Physics Publishing, 1997, 264 pp.
5. McCulloch, W. S., and Pitts, W., "A Logical Calculus of the Ideas Immanent in Nervous
Activity," Bulletin of Mathematical Biophysics, 5, 1943, pp. 115-133.
6. Hebb, D. O., The Organization of Behavior: A Neuropsychological Theory. Wiley, New York,
1949, 335 pp.
7. Rosenblatt, F., "The Perceptron: A Probabilistic Model for Information Storage and
Organization in the Brain," Psychological Review, 65, 1958, pp. 386-408.