processing elements, an activation function is applied to the input function, which weights the connections by the activations of the previous layer; the hyperbolic tangent and the sigmoid are the most commonly used. If the output of this function is positive, the activation occurs and, at that instant, the neurons receiving the activation take as input the product of the connection weight and the activation value.
This way of propagating the neuron's activation is very punctual, lacking the continuity seen in the natural model, where the activation flows from a maximum point (the action potential) down to "zero" (the resting potential).
In this work, the function of the action potential has been added to the classical architecture of the processing element, approximating it by a straight line that falls from the output level of the activation function to the zero level.
This model of ANN changes the conception of the connections between neurons: instead of a single numerical value, each connection is represented by a function, a line with negative gradient that is characterized by its gradient value (Figure 2).
Now, the neuron's functioning has to be explained as a function of time. That is, the activation of a neuron is the sum of a series of inputs at a point in time. This affects not only the output neurons at that instant; the activation also affects the following neurons during the n next instants (Figure 3).
The value of n depends on the gradient of that
connection.
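In a discrete-time simulation, n can be estimated as the number of instants during which the decaying activation stays above zero; a minimal sketch, where the step duration is an assumed discretization and b (initial output) and m (negative gradient) refer to the decay line described in this section:

```python
import math

def active_instants(b, m, step=1.0):
    """Number of discrete instants during which a linearly decaying
    activation (starting at b, with gradient m < 0) remains above zero.
    The step duration is an assumed discretization, not the book's."""
    t_zero = -b / m              # time at which the line reaches zero
    return math.ceil(t_zero / step)

# A shallower gradient keeps the activation alive for more instants:
print(active_instants(1.0, -0.5))   # 2
print(active_instants(1.0, -0.25))  # 4
```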
Figure 4 shows the line formula, y = mx + b, that emulates the time-decreased activation of natural neurons. In this formula, y represents the value of the neuron's output received by the following neurons, b is the output of the neuron's activation function, x is the time elapsed since the activation, and m is the line gradient which, as already noted, is always negative.
At the moment the activation occurs, the time is 0 and the output is y = b; from that moment, and depending on the gradient, the output y decreases to 0 over a time x = t.
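The behaviour at x = 0 and at x = t can be checked by evaluating the line directly; a minimal sketch, clipping the output at the resting ("zero") level (the function name is our own):

```python
def decayed_output(b, m, x):
    """Output y = m*x + b of a processing element at time x after
    its activation, clipped at the resting ("zero") level.
    b: activation-function output at x = 0; m: negative gradient."""
    return max(m * x + b, 0.0)

# At the activation instant (x = 0) the output equals b:
print(decayed_output(1.0, -0.25, 0.0))  # 1.0
# It then decreases and reaches 0 at time x = t = -b/m:
print(decayed_output(1.0, -0.25, 4.0))  # 0.0
```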
Training Process
Once the new architecture has been defined, applying this type of ANN to solve real problems requires a training algorithm that adjusts the weights and, in this case, also the gradients of the line formula used for the time-decreased activation of each neuron in the ANN. The learning algorithms used until now are not valid for this new kind of PE, because the line gradient of each PE must be calculated as well.
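Because classical gradient-based rules do not cover the extra line-gradient parameters, weights and gradients can be tuned together by an evolutionary search; a minimal sketch, where the target values, population sizes, and fitness function are all illustrative assumptions, not the book's setup:

```python
import random

# Evolve the parameters of one connection: its weight w and its
# (negative) line gradient m. The targets are illustrative only.
TARGET_W, TARGET_M = 0.8, -0.3

def fitness(ind):
    """Negative squared error to the target pair; higher is better."""
    w, m = ind
    return -((w - TARGET_W) ** 2 + (m - TARGET_M) ** 2)

def mutate(ind, scale=0.1):
    """Gaussian perturbation; the gradient is kept strictly negative."""
    w, m = ind
    return (w + random.gauss(0, scale),
            min(m + random.gauss(0, scale), -1e-6))

random.seed(0)
pop = [(random.uniform(-1, 1), random.uniform(-1, -0.01)) for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                      # simple truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(pop, key=fitness)
```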
To train this RANN, the Genetic Algorithms (GA) technique is used. This technology has
Figure 2. Representation of the time decreased activation: inputs x1 … xn weighted by w1 … wn are summed (Net = Σ(xi * wi)) and passed through the activation function F(Net); the neuron's output then decays over time t.