limited stimulation must have its organized core, but it may also have a fringe
content, or meaning, that varies with the circumstances of arousal.
An individual cell or neuron set may enter into more than one assembly at
different times. The single assembly or small group of assemblies can be repeatedly
aroused when some other activity intervenes. In vision, for example, the perception
of vertical lines must occur thousands of times an hour; in conversation, the word
“the” must be perceived and uttered with very high frequency; and so on.
2.4 Perceptrons
Perceptrons, the architecture of classical neural networks, were proposed by
Frank Rosenblatt in 1957 [7, 8]. The function of a perceptron is the classification
of patterns. A pattern can be considered a point in n-dimensional space, whose
n coordinates correspond to the features of the object to be classified.
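As a concrete illustration (a minimal sketch; the object and its feature values are invented for this example, not taken from the source), a pattern described by n = 4 measured features is simply a point in 4-dimensional space:

    # A pattern is a point in n-dimensional space: one coordinate per feature.
    # The feature values below are illustrative only.
    pattern = [5.1, 3.5, 1.4, 0.2]  # four measured features of one object
    n = len(pattern)                # dimensionality of the pattern space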
In its most common form, the perceptron was presented as a structure with one
layer of neurons connected to the inputs of the system. These connections carry
weight coefficients, which can be changed during the training process.
The goal is to find a set of weights w0, w1, ..., wn such that the output of the
perceptron is 1 if the input pattern vector belongs to class 1, and 0 if it belongs
to class 0. The weights are modified in accordance with the perceptron learning
rule (or law). The network was trained by supervised learning; in other words, for
each input X to the network, the correct output Y was also supplied.
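This training scheme can be sketched in a few lines of Python. The function name, the learning rate, the epoch limit, and the toy AND problem below are illustrative assumptions, not details from the source:

    import numpy as np

    def train_perceptron(X, Y, lr=1.0, epochs=100):
        # X: (m, n) array of m pattern vectors with n features each.
        # Y: (m,) array of targets, 1 for class 1 and 0 for class 0.
        m, n = X.shape
        # Prepend a constant 1 to every pattern so w[0] plays the role of w0.
        Xb = np.hstack([np.ones((m, 1)), X])
        w = np.zeros(n + 1)
        for _ in range(epochs):
            errors = 0
            for x, t in zip(Xb, Y):
                y = 1 if np.dot(w, x) > 0 else 0  # threshold (step) output
                w += lr * (t - y) * x             # update only on mistakes
                errors += int(y != t)
            if errors == 0:  # every training pattern classified correctly
                break
        return w

    # Toy linearly separable problem: logical AND of two binary inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    Y = np.array([0, 0, 0, 1])
    print(train_perceptron(X, Y))

Note that the weights change only when a pattern is misclassified (t differs from y); this is the essence of the perceptron learning rule and the reason training converges quickly on linearly separable data.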
The perceptron was one of the first neural network paradigms, and it is still used
occasionally. Its simple structure and fast training convergence made the
Rosenblatt perceptron attractive to researchers. Rosenblatt stressed that percep-
trons were not developed to solve any practical task of pattern recognition or
artificial intelligence; the perceptron was a model of the human brain rather than
an applied technical device. However, it was clear that perceptrons could also be
used in practical applications.
Often, the Rosenblatt perceptron is considered a one-layer perceptron [9, 10], and
three-layered Rosenblatt perceptrons are usually mentioned only in a historical
context [11], even though Rosenblatt mainly investigated three-layered percep-
trons. It is therefore interesting to build new classifiers on the basis of the
three-layered Rosenblatt perceptron and examine whether they can compete with
modern neural classifiers.
Analyzing the principal deficiencies of perceptrons, Rosenblatt mentioned the
following [8]:
...
1. An excessively large system may be required.
2. The learning time may be excessive.
3. The system may be excessively dependent on external evaluation during
learning.