Fig. 8.9. Activation function for ρ = 1
asymptotic values, one can also use the following activation function, which
is continuous on [0, ρ]:
$$
y_i =
\begin{cases}
0 & \text{if } v_i = 0, \\[4pt]
\dfrac{1}{2}\left[\,1 + \sin \pi\left(\dfrac{v_i}{\rho} - \dfrac{1}{2}\right)\right] & \text{if } 0 < v_i < \rho, \\[4pt]
1 & \text{if } v_i = \rho.
\end{cases}
$$
Figure 8.9 shows the shape of this activation function for ρ = 1. In that
activation function, ρ is a strictly positive real number that controls the
maximal slope of the function: the derivative reaches its maximum, π/(2ρ),
at v_i = ρ/2. The function is continuous, differentiable and monotonically
increasing. As ρ tends to 0, the activation function tends to a step function.
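As an illustration, here is a minimal sketch of this activation function in Python. The vectorized NumPy form and the names `activation` and `rho` are ours, and extending the function outside [0, ρ] by clamping to 0 and 1 is our assumption, not the book's definition:

```python
import numpy as np

def activation(v, rho=1.0):
    """Piecewise-sine activation: 0 for v <= 0, 1 for v >= rho,
    with a smooth half-sine ramp (maximal slope pi/(2*rho)) in between."""
    v = np.asarray(v, dtype=float)
    ramp = 0.5 * (1.0 + np.sin(np.pi * (v / rho - 0.5)))
    return np.where(v <= 0.0, 0.0, np.where(v >= rho, 1.0, ramp))
```

For example, `activation(0.5, rho=1.0)` returns 0.5, and shrinking `rho` narrows the ramp toward the step behaviour noted above.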
8.6.2 Architectures of Neural Networks for Optimization
Recurrent neural networks are the neural techniques most frequently used
for solving optimization problems. As explained in Chap. 2, the connection
graph of those networks contains at least one cycle. For optimization, those
networks have no control input: they evolve under their own dynamics, from
an initial state (often random), to an attractor that encodes a solution of
the optimization problem; a minimal sketch of such dynamics is given below.
We will show later that simulated
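Here is a minimal sketch of such free-running dynamics in Python, reusing the `activation` function sketched above. The asynchronous one-unit-at-a-time update rule, the fixed step budget, and all names are our assumptions for illustration, not the book's prescription:

```python
import numpy as np

def run_to_attractor(W, b, rho=0.1, n_steps=5000, rng=None):
    """Free-running recurrent (Hopfield-style) network: no control input,
    random initial state, repeated asynchronous updates until it settles."""
    rng = np.random.default_rng() if rng is None else rng
    n = W.shape[0]
    y = rng.random(n)              # random initial state in [0, 1]
    for _ in range(n_steps):
        i = rng.integers(n)        # pick one unit (asynchronous dynamics)
        v = W[i] @ y + b[i]        # potential of unit i
        y[i] = activation(v, rho)  # sine-ramp activation defined above
    return y                       # state near an attractor
```

With a symmetric weight matrix W whose associated energy encodes the cost function, asynchronous updates of this kind drive the state toward a local minimum of that energy, i.e., an attractor; with the continuous activation this is a heuristic sketch rather than a convergence guarantee.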