8.6.5.8 Lagrangian Neural Networks
Kuhn and Tucker showed that the solutions of a continuous optimization
problem with constraints are the saddle points of an associated function, called
the Lagrangian [Kuhn et al. 1951]. Therefore, such problems can be solved by
gradient methods applied to their Lagrangian.
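For instance, for the minimization of a function $f(x)$ under a single equality constraint $g(x) = 0$, the Lagrangian and the associated gradient dynamics read

$$
L(x, \lambda) = f(x) + \lambda\, g(x),
\qquad
\frac{d x}{d t} = -\frac{\partial L}{\partial x} = -\nabla f(x) - \lambda\, \nabla g(x),
\qquad
\frac{d \lambda}{d t} = +\frac{\partial L}{\partial \lambda} = g(x).
$$

A stationary point of those dynamics satisfies $\nabla f(x) + \lambda\, \nabla g(x) = 0$ and $g(x) = 0$, i.e., the optimality conditions of the constrained problem; since the dynamics perform a descent with respect to $x$ and an ascent with respect to $\lambda$, they settle at a saddle point of $L$.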
Lagrangian neural networks are recurrent neural networks with continuous
outputs, which have the specific property of converging to the saddle points
of a Lyapunov function, instead of converging to local minima. They can be
used to solve combinatorial optimization problems [Maa et al. 1992; Mjolsness
et al. 1990, 1991; Zhang 1992; Rangarajan et al. 1996].
Example of the TSP
An augmented Lagrangian can be defined for the TSP, encoded as for the
Hopfield network,
$$
L(\boldsymbol{y}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = \mathrm{Cost}
+ \sum_{i=1}^{N} \lambda_i^{0} \Bigl( \sum_{j=1}^{N} y_{i,j} - 1 \Bigr)
+ \sum_{j=1}^{N} \lambda_j^{1} \Bigl( \sum_{i=1}^{N} y_{i,j} - 1 \Bigr)
+ \frac{1}{2} \sum_{i=1}^{N} \mu_i^{0} \Bigl( \sum_{j=1}^{N} y_{i,j} - 1 \Bigr)^{2}
+ \frac{1}{2} \sum_{j=1}^{N} \mu_j^{1} \Bigl( \sum_{i=1}^{N} y_{i,j} - 1 \Bigr)^{2} .
$$
In that equation, parameters $\{\lambda_i^{0}\}$, $\{\lambda_j^{1}\}$ are the Lagrange multipliers,
while parameters $\{\mu_i^{0}\}$, $\{\mu_j^{1}\}$ are the penalty weights.
Continuous equations for the neuron outputs and the Lagrange multipliers
are defined as follows:
$$
\frac{d y_{i,j}}{d t} = -\frac{1}{\mu} \left[ \sum_{\substack{k=1 \\ k \neq i}}^{N} d_{i,k}\, y_{k,j}
+ \lambda_i^{0} + \lambda_j^{1}
+ \mu_i^{0} \Bigl( \sum_{j=1}^{N} y_{i,j} - 1 \Bigr)
+ \mu_j^{1} \Bigl( \sum_{i=1}^{N} y_{i,j} - 1 \Bigr) \right],
$$

$$
\frac{d \lambda_i^{0}}{d t} = \rho \Bigl( \sum_{j=1}^{N} y_{i,j} - 1 \Bigr),
\qquad
\frac{d \lambda_j^{1}}{d t} = \rho \Bigl( \sum_{i=1}^{N} y_{i,j} - 1 \Bigr).
$$
In the above equations, µ and ρ regulate the convergence speed of the
network.
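As an illustration, the following Python sketch integrates those equations with an explicit Euler scheme; the step size, the uniform penalty weights mu0 and mu1, the random initialization, and the clipping of the outputs to [0, 1] are assumptions of this sketch, not prescriptions of the method.

import numpy as np

def lagrangian_tsp_network(d, n_steps=20000, dt=0.01, mu=1.0, rho=0.1,
                           mu0=5.0, mu1=5.0, seed=0):
    """Euler integration of the Lagrangian-network equations above.

    d        -- (N, N) matrix of distances d_ik between cities
    mu, rho  -- parameters regulating the convergence speed
    mu0, mu1 -- penalty weights (taken uniform here; an assumption)
    Returns the matrix of neuron outputs y (rows: cities, columns: positions).
    """
    rng = np.random.default_rng(seed)
    N = d.shape[0]
    y = rng.uniform(0.0, 1.0 / N, size=(N, N))   # small random initial outputs
    lam0 = np.zeros(N)                           # multipliers of the row constraints
    lam1 = np.zeros(N)                           # multipliers of the column constraints
    d_off = d - np.diag(np.diag(d))              # enforces k != i in the cost term

    for _ in range(n_steps):
        row_err = y.sum(axis=1) - 1.0            # sum_j y_ij - 1, for each city i
        col_err = y.sum(axis=0) - 1.0            # sum_i y_ij - 1, for each position j

        # gradient of the augmented Lagrangian with respect to y_ij
        grad = (d_off @ y                        # sum_{k != i} d_ik y_kj
                + lam0[:, None] + lam1[None, :]
                + mu0 * row_err[:, None]
                + mu1 * col_err[None, :])

        y = np.clip(y - (dt / mu) * grad, 0.0, 1.0)  # descent on the outputs
        lam0 += dt * rho * row_err                   # ascent on the multipliers
        lam1 += dt * rho * col_err

    return y

If the dynamics converge to a feasible point, each row and each column of y sums to 1, and a tour can be read off by assigning each city to its largest output; in practice the penalty weights and the step size must be tuned to the instance at hand.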
That approach is far less sensitive than Hopfield networks to the penalty
parameters associated with the constraint violations. However, convergence
times may turn out to be much longer.