Fig. 8.8. Sigmoid function for different values of $\gamma$ ($\gamma = 50, 10, 5, 3, 2, 1, 0.5, 0.2$)
sigmoidal activation function: in the latter case, the output $y_i$ of neuron $i$ is given by $y_i = \tanh(\gamma v_i)$, where $\gamma$ is the slope at the origin of the sigmoid, and where $v_i$ is the potential of neuron $i$, defined, as in the previous chapters, for a network of $N$ mutually connected neurons, as
$$v_i = \sum_{j=1}^{N} w_{ij}\, y_j + I_i,$$
where $I_i$ is the constant input (bias) of neuron $i$.
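As a concrete illustration (not part of the original text), a minimal Python sketch of this update rule follows; the weights, biases, gain, and the synchronous-iteration loop are arbitrary placeholders, one possible scheme rather than the chapter's prescribed dynamics.

```python
import numpy as np

def neuron_outputs(W, y, I, gamma):
    """Outputs y_i = tanh(gamma * v_i), with potentials v_i = sum_j w_ij * y_j + I_i."""
    v = W @ y + I               # potential of each of the N neurons
    return np.tanh(gamma * v)   # sigmoidal activation, slope gamma at the origin

# Hypothetical example: N = 3 mutually connected neurons.
W = np.array([[ 0.0, 0.5, -0.3],
              [ 0.5, 0.0,  0.2],
              [-0.3, 0.2,  0.0]])   # symmetric weights, no self-connections
I = np.array([0.1, -0.2, 0.05])    # constant inputs (biases)
y = np.zeros(3)                    # initial outputs
for _ in range(100):               # iterate the network dynamics
    y = neuron_outputs(W, y, I, gamma=5.0)
```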
Remark. In contrast to the previous chapters, for the neural networks dedicated to optimisation we will explicitly distinguish the bias from the other inputs of the neurons.
The only difference from the neurons used in Chaps. 2 to 4 is therefore that the slope $\gamma$ may differ from 1. Note that the sigmoid approaches the hard limiter as $\gamma$ increases (Fig. 8.8); that is why the inverse of the slope can be regarded as a temperature, by analogy with the algorithms described in the previous sections.
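Although the chapter does not write it out, the limit behind this remark is simply
$$\lim_{\gamma \to \infty} \tanh(\gamma v_i) = \operatorname{sign}(v_i) \quad (v_i \neq 0), \qquad y_i = \tanh\!\left(\frac{v_i}{T}\right) \ \text{with} \ T = \frac{1}{\gamma},$$
so that lowering the temperature $T$ sharpens the sigmoid toward the hard limiter.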
It is sometimes preferable to use neurons with continuous outputs between 0 and 1. Those can be obtained directly from the previous formula by the change of variable $(y_i + 1)/2$.
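A one-line computation (added here for completeness) confirms this: the change of variable turns the hyperbolic tangent into the logistic function,
$$\frac{\tanh(\gamma v_i) + 1}{2} = \frac{1}{1 + e^{-2\gamma v_i}},$$
whose values lie strictly between 0 and 1.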
When the outputs must be binary after convergence of the network, instead of using the previous sigmoid function, for which $0$ (or $-1$) and $1$ are