computational capabilities for solving the majority of practical problems; only in rare cases are additional hidden layers needed. This also holds for time series analysis and forecasting applications.
Incidentally, the concept of the perceptron emerged at a time when the difficulties of solving complex intelligent problems using the classical computing automata of John von Neumann had grown insurmountable. It was realized that solving such problems requires massive, highly parallel, distributed data processing systems, and the construction of such sophisticated computational systems was already on the agenda of some leading research institutions. The perceptron, a simple computing element that can easily be interconnected with other perceptrons to build huge computing networks, was therefore viewed as a more promising route to the massively parallel computational systems needed at that time. Minsky and Papert (1969) expected that the use of more complex, MLP configurations could help in building future intelligent, general-purpose computers with learning and cognition capability. This was soon demonstrated by using ADALINE (A) units as the basic elements of single-layer perceptrons to build a multi-layer MADALINE architecture (see Figure 3.3).
[Figure 3.3. ADALINE-based MADALINE: the inputs x1, x2, ..., xn feed a first layer of ADALINE (A) units, whose outputs pass through a second and a third ADALINE layer to produce the network outputs y01, y02, ...]
In the late 1950s, Rosenblatt used a single perceptron layer for optical character recognition. It was a multiple-input structure fully connected to the perceptron layer through adjustable multiplicative constants w, called weights. Before being forwarded to the processing elements (i.e. the perceptrons) of the single network layer, the input signals are multiplied by the corresponding weight values. The outputs of the processing units form a set of signals that determines the number of pattern classes distinguishable in the input data by the linear separation capability of the perceptron layer. For weight adjustment, Rosenblatt used the delta rule.
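The weight-adjustment scheme just described can be sketched as follows. This is a minimal illustration under simplifying assumptions (a single threshold unit, binary targets, a toy AND problem rather than character recognition): the weights are changed in proportion to the output error times the input, and the linearly separable classes are learned after a few passes over the data.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Train a single threshold perceptron with an error-correction (delta-style) rule.

    X: (n_samples, n_features) input patterns; y: target classes in {0, 1}.
    Each weight change is proportional to error * input.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])  # small random initial weights
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            output = 1.0 if xi @ w + b >= 0 else 0.0   # hard threshold unit
            error = target - output
            w += lr * error * xi                       # delta-style update
            b += lr * error
    return w, b

# Toy linearly separable problem: logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, y)
preds = [1.0 if xi @ w + b >= 0 else 0.0 for xi in X]
print(preds)
# → [0.0, 0.0, 0.0, 1.0]
```

Because the two classes here are linearly separable, the error-correction updates converge to a separating weight vector, which is exactly the limitation (and strength) of a single perceptron layer noted in the text.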