Figure 7.5: Cyclic firing in a network of neurons.
7.3 ARTIFICIAL NEURAL NETWORKS
In 1943, McCulloch and Pitts published an article that showed how the basic actions of a neuron could
be generalized. Furthermore, they showed that these general units, called perceptrons, could be used to
perform logical functions such as AND and OR. Each perceptron is a system with a simple input/output
transfer function that roughly models the behavior of a neuron. Because of the simple transfer function,
many neurons (thousands or even millions) may be simulated quickly. The general philosophy, unlike much
of this text, is that the higher-level functions of the brain are governed by the connections between neurons
and not by the neurons themselves. So, to model the higher-level functions of the brain (learning, distributed
memory, planning, pattern recognition), it is not necessary to model the details of the neuron. Rather, it is
more computationally efficient to model neurons as simple units connected together in complex ways,
as in Fig. 7.6. In the following section we give a brief overview of neural networks composed of
perceptrons.
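
As a concrete illustration (a minimal sketch, not taken from the text), the Python code below shows how a single threshold unit with hand-picked weights and thresholds can reproduce the logical AND and OR functions mentioned above. The function names and the particular weight and threshold values are illustrative assumptions, not part of the original presentation.

# A McCulloch-Pitts style threshold unit with hand-picked weights
# can reproduce the logical AND and OR functions.

def threshold_unit(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold, else 0."""
    s = sum(w * a for w, a in zip(weights, inputs))
    return 1 if s >= threshold else 0

def logical_and(a, b):
    # Both inputs must be active for the weighted sum (1*a + 1*b) to reach 2.
    return threshold_unit([a, b], weights=[1, 1], threshold=2)

def logical_or(a, b):
    # A single active input is enough for the weighted sum to reach 1.
    return threshold_unit([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", logical_and(a, b), "OR:", logical_or(a, b))
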
7.3.1 The Perceptron and Single-Layer Networks
In general, a perceptron, i, may only be in one of two states, firing (1) or not firing (0). This state is the
output of the perceptron. But a perceptron may also take as inputs the outputs of other perceptrons.
Computing the output of a perceptron given its inputs is a two-step process. First, a weighted sum of the
inputs is computed as
in_j = \sum_{i=1}^{N} w(i, j) a_i        (7.3)
where a_i are the N inputs and w(i, j) are the weights between perceptrons i and j. The second step is to
apply an activation function to this weighted sum.
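
The two-step computation can be sketched directly in code. The fragment below is a minimal Python illustration, assuming a hard-threshold (step) activation function, since the text introduces the activation function only generically; the function names, input values, weights, and threshold are hypothetical choices for the example.

def weighted_sum(a, w_j):
    """Step 1: compute in_j = sum over i of w(i, j) * a_i for the N inputs."""
    return sum(w * x for w, x in zip(w_j, a))

def step_activation(in_j, threshold=0.0):
    """Step 2 (assumed form): fire (1) if the weighted sum exceeds the threshold."""
    return 1 if in_j > threshold else 0

def perceptron_output(a, w_j):
    return step_activation(weighted_sum(a, w_j))

# Example: three input units feeding perceptron j (values chosen for illustration).
inputs = [1, 0, 1]                # a_1, a_2, a_3
weights_to_j = [0.5, -0.3, 0.2]   # w(1, j), w(2, j), w(3, j)
print(perceptron_output(inputs, weights_to_j))  # weighted sum = 0.7, so the output is 1
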