7.4 Artificial Neural Networks
Artificial neural networks are nonlinear adaptive devices inspired by models
of the behavior of the brain. In a nutshell, these networks can be understood
as arising from the interconnection of relatively simple nonlinear processing
units, the neurons. In view of the breadth of the topic, we restrict our
attention to two classical neural networks: the MLP and the RBF network.
7.4.1 A Neuron Model
We start by considering a neuron model that closely resembles the seminal
McCulloch-Pitts proposal [206]. This neuron is the basic unit underlying the
perceptron [254], a solution that played a major historical role in the
learning of efficient classifiers. The neuron model is simple: a number of
input stimuli are linearly combined according to a set of weights and then
subjected to a nonlinear memoryless function. Intuitively, the model evokes
the effects of the synapses, the integration of stimuli, and the existence of
a threshold related to activation. Figure 7.6 illustrates this model.
Mathematically, the input-output response of the neuron is given by

y(n) = φ(w^T x(n))    (7.26)
where
w is the synaptic weight vector
x(n) is the neuron input vector
φ(·) is the nonlinear function known as the activation function
The activation function contains a threshold parameter that can be eliminated
by considering the existence of an input always fixed at +1 and an associated
weight.
FIGURE 7.6
Neuron model: the inputs x_0(n), x_1(n), ..., x_K(n) are weighted by w_0, w_1, ..., w_K and summed (Σ) into u(n), which is passed through the activation function φ(·) to produce the output y(n).
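As a minimal numerical sketch (not taken from the text) of how Equation (7.26) and the threshold-absorption trick can be coded, the fragment below assumes a logistic sigmoid for φ(·) and illustrative weight and input values:

import numpy as np

def logistic(u):
    # Illustrative choice of activation function phi(.) -- an assumption, not fixed by the text.
    return 1.0 / (1.0 + np.exp(-u))

def neuron_output(w, x, phi=logistic):
    # Single-neuron response of Equation (7.26): y(n) = phi(w^T x(n)).
    return phi(np.dot(w, x))

# Threshold absorption: an input fixed at +1 carries the threshold weight w_0.
x = np.array([0.5, -1.2, 0.3])        # x_1(n), ..., x_K(n) (illustrative values)
w = np.array([0.4, 0.2, -0.7, 0.1])   # w_0 (threshold weight), w_1, ..., w_K
x_aug = np.concatenate(([1.0], x))    # prepend the fixed input x_0(n) = +1

print(neuron_output(w, x_aug))        # y(n) for this input vector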
 
 