Fig. 12.6 XOR function as an example of a non-linearly separable problem (the AND, OR, and XOR class patterns plotted in the p1-p2 input plane)
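To make the figure's point concrete, the following brute-force sketch (an illustrative addition, not from the text) scans weights and a bias for a single linear threshold unit over the XOR truth table; no setting classifies all four patterns correctly, whereas AND and OR are trivially separable.

```python
import itertools

# XOR truth table: inputs (p1, p2) and target class.
patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Scan a coarse grid of weights and biases for a single linear
# threshold unit y = [w1*p1 + w2*p2 + b > 0]; no setting gets
# all four XOR patterns right.
grid = [x / 10.0 - 2.0 for x in range(41)]   # -2.0 ... 2.0 in steps of 0.1
separable = any(
    all((w1 * p1 + w2 * p2 + b > 0) == bool(t) for (p1, p2), t in patterns)
    for w1, w2, b in itertools.product(grid, grid, grid)
)
print("XOR linearly separable on this grid:", separable)  # -> False
```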
Fig. 12.7 ADALINE neuron model (inputs p1, ..., pR with weights w1, ..., wR and bias b; the weighted sum s passes through the purelin activation to give the output y)
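As a minimal illustration of Fig. 12.7 (the function and variable names below are assumptions chosen to mirror the figure), the ADALINE output is the identity ("purelin") activation applied to the weighted sum s = w1*p1 + ... + wR*pR + b:

```python
import numpy as np

def adaline(p, w, b):
    """ADALINE forward pass: s = w . p + b, y = purelin(s) = s."""
    s = np.dot(w, p) + b  # weighted sum of inputs plus bias
    return s              # purelin is the identity, so y = s

# Example with R = 3 inputs.
p = np.array([0.5, -1.0, 2.0])
w = np.array([0.2, 0.4, -0.1])
print(adaline(p, w, b=0.3))  # -> approximately -0.2
```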
• They can only learn linear relationships between input and output vectors.
• Even if a perfect solution does not exist, the linear network will minimize the
sum of squared errors, provided the learning rate is sufficiently small.
• The network will, however, find as close a solution as is possible given the
linear nature of its architecture; this property holds because the error surface
of a linear network is a multidimensional paraboloid, and since a paraboloid has
only one minimum, a gradient descent algorithm such as the least mean square
(LMS) rule must arrive at that minimum (a minimal sketch follows this list).
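Because the squared-error surface is a paraboloid with a single minimum, gradient descent via the LMS (Widrow-Hoff) rule reaches it when the learning rate is small enough. A minimal sketch, assuming a hand-picked learning rate and a simple noiseless linear target:

```python
import numpy as np

def lms_train(P, T, lr=0.01, epochs=100):
    """LMS training of a single ADALINE: w and b follow the negative
    gradient of the squared error, one pattern at a time."""
    w, b = np.zeros(P.shape[1]), 0.0
    for _ in range(epochs):
        for p, t in zip(P, T):
            e = t - (np.dot(w, p) + b)   # error for this pattern
            w += lr * e * p              # Widrow-Hoff updates
            b += lr * e
    return w, b

# Fit a noiseless linear target t = 2*p1 - p2 + 1.
P = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = 2 * P[:, 0] - P[:, 1] + 1
w, b = lms_train(P, T, lr=0.1, epochs=500)
print(w, b)  # -> approximately [2, -1] and 1
```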
To solve more complex problems for which single neurons do not have sufficient
power, the neurons are commonly connected together to build networks of various
types. The most popular and frequently used structure is the multilayer
feed-forward perceptron shown in Fig. 12.8. The information (signals) flows in
one direction only (feed-forward); there are no feedback connections. The ANN
can be fully connected (all neurons in a given layer are connected to all
neurons of the next layer) or can have some connections missing, which means
that the weighting coefficients of the broken synapses are equal to 0 (see the
sketch after this paragraph).
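A feed-forward pass can be written layer by layer, and a missing synapse is simply a zero entry in the corresponding weight matrix. A minimal sketch (the layer sizes, weights, and activation choices are assumptions for illustration, not taken from Fig. 12.8):

```python
import numpy as np

def feedforward(p, layers):
    """Propagate input p through a list of (W, b, f) layers."""
    a = p
    for W, b, f in layers:
        a = f(W @ a + b)   # each layer: weighted sum, then activation
    return a

tanh = np.tanh
purelin = lambda s: s

# 2 inputs -> 3 hidden tanh neurons -> 1 linear output neuron.
W1 = np.array([[0.5, -0.3],
               [0.0,  0.8],   # W1[1, 0] = 0 models a missing synapse
               [1.0,  0.2]])
b1 = np.array([0.1, 0.0, -0.2])
W2 = np.array([[1.0, -1.0, 0.5]])
b2 = np.array([0.0])

print(feedforward(np.array([1.0, 2.0]), [(W1, b1, tanh), (W2, b2, purelin)]))
```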
The ANNs with linear neurons (ADALINE or hard-limited) have limited abilities.
Much greater interest attaches to non-linear nets whose neuron model is
equipped with a so-called squashing activation function applied to the weighted
sum of inputs.
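Typical squashing functions are the logistic sigmoid and the hyperbolic tangent, both smooth and bounded; a brief sketch (these two particular choices are common examples, not prescribed by the text):

```python
import numpy as np

def logsig(s):
    """Logistic sigmoid: squashes any real s into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-s))

def tansig(s):
    """Hyperbolic tangent: squashes any real s into (-1, 1)."""
    return np.tanh(s)

s = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(logsig(s))  # values approach 0 and 1 at the extremes
print(tansig(s))  # values approach -1 and 1 at the extremes
```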
 