Clearly, the first term in that equation is always negative or zero, because $\tau_i > 0$ and $dy_i/dv_i > 0$. As for the second term, it can be positive, negative or zero; as a consequence, the derivative of the energy function with respect to time can be positive [Takefuji 1992]. In order to avoid such a situation, a slightly different motion equation is often used. It is given by
$$\frac{dv_i}{dt} = -\frac{\partial E(y)}{\partial y_i}, \qquad i = 1, \ldots, N.$$
As for the neuron outputs, the following equations are obtained:
$$\frac{dy_i}{dt} = -\frac{1}{T}\left(1 - y_i^2\right)\frac{\partial E(y)}{\partial y_i}, \qquad i = 1, \ldots, N.$$
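As an illustration, these dynamics can be integrated numerically by an explicit Euler scheme. The sketch below is not from the text: it assumes a simple quadratic energy $E(y) = -\frac{1}{2} y^\top W y$ with symmetric weights $W$ (so that $\partial E/\partial y_i = -\sum_j w_{ij} y_j$) and $\tanh$ neurons $y_i = \tanh(v_i/T)$; the step size, temperature and random weights are arbitrary illustrative choices.

```python
import numpy as np

def simulate(W, T=0.5, dt=0.01, steps=2000, seed=0):
    """Integrate the modified motion equation dv_i/dt = -dE/dy_i
    with tanh neuron outputs y_i = tanh(v_i / T)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    v = rng.uniform(-0.1, 0.1, n)      # small random initial potentials
    energies = []
    for _ in range(steps):
        y = np.tanh(v / T)
        grad = -W @ y                  # dE/dy for E(y) = -1/2 y^T W y
        v += dt * (-grad)              # modified motion equation (no decay term)
        energies.append(-0.5 * y @ W @ y)
    return np.tanh(v / T), np.array(energies)

# symmetric weights with zero diagonal (illustrative)
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

y, E = simulate(W)
# for a small enough step size, the recorded energies are non-increasing
```

Recording the energy along the trajectory makes the monotone decrease established below directly observable.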
Thus, the dynamic properties of an analog Hopfield neural network are
governed by a system of non-linear differential equations.
With the above modification, it can easily be proved that any change of state (i.e., of output) of a neuron decreases, or keeps constant, the network energy. Indeed, since $dy_i/dt = (dy_i/dv_i)(dv_i/dt)$ and $dv_i/dt = -\partial E/\partial y_i$,
$$\frac{dE}{dt} = \sum_{i=1}^{N} \frac{\partial E}{\partial y_i}\,\frac{dy_i}{dt} = \sum_{i=1}^{N} \frac{\partial E}{\partial y_i}\,\frac{dy_i}{dv_i}\,\frac{dv_i}{dt} = -\sum_{i=1}^{N} \left(\frac{dv_i}{dt}\right)^{2} \frac{dy_i}{dv_i},$$
and, since $dy_i/dv_i > 0$ for every neuron,
$$\frac{dE}{dt} \leq 0.$$
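This monotone-decrease identity can also be checked numerically at an arbitrary network state. The snippet below is a sketch under the same illustrative assumptions as before (a quadratic energy $E(y) = -\frac{1}{2} y^\top W y$ and $y_i = \tanh(v_i/T)$, neither of which is prescribed by the text): it compares a finite-difference estimate of $dE/dt$ with the closed-form right-hand side.

```python
import numpy as np

T, dt = 0.5, 1e-6
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
W = (A + A.T) / 2                     # symmetric weights (illustrative)
np.fill_diagonal(W, 0.0)

v = rng.uniform(-1.0, 1.0, 6)         # an arbitrary network state

def energy(v):
    y = np.tanh(v / T)
    return -0.5 * y @ W @ y

y = np.tanh(v / T)
grad = -W @ y                         # partial derivatives dE/dy_i
dv_dt = -grad                         # modified motion equation
dy_dv = (1.0 - y**2) / T              # derivative of tanh(v/T)

# closed form: dE/dt = -sum_i (dv_i/dt)^2 * (dy_i/dv_i), always <= 0
closed = -np.sum(dv_dt**2 * dy_dv)

# finite-difference estimate: advance v by one tiny Euler step
fd = (energy(v + dt * dv_dt) - energy(v)) / dt
```

The two quantities agree up to discretization error, and the closed form is manifestly non-positive since every term in the sum is a square times a positive slope.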
In other words, the system of equations constrains the energy function $E$ to decrease monotonically to a local minimum. More precisely, the system has the following property: from an initial point $x$ inside the hypercube $[-1, 1]^N$, the dynamic system converges to a local minimum of the energy function $E(y)$, located either on a vertex or on the surface of the hypercube. When the attractor is on the surface, any vertex of that surface can be chosen as a solution, since the energy is constant on that surface. Therefore, any solution to which the network converges is locally optimal.
In practice, with a dedicated hardware implementation of such a network, the convergence times are of the order of a few nanoseconds to microseconds. That generally allows several resolutions of the problem, starting from different initial points. That strategy can sometimes be efficient, but generally the attractors are not satisfactory.
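The multistart strategy just described, i.e., several resolutions from different initial points, keeping the attractor of lowest energy, can be sketched as follows. The energy, weights, step size and restart count are illustrative assumptions, and `simulate_once` is a hypothetical stand-in for one run of the network dynamics.

```python
import numpy as np

def simulate_once(W, T=0.5, dt=0.01, steps=1500, rng=None):
    """One resolution: integrate dv/dt = -dE/dy from a random initial point."""
    if rng is None:
        rng = np.random.default_rng()
    v = rng.uniform(-0.1, 0.1, W.shape[0])
    for _ in range(steps):
        y = np.tanh(v / T)
        v += dt * (W @ y)             # -dE/dy = W y for E(y) = -1/2 y^T W y
    y = np.tanh(v / T)
    return y, -0.5 * y @ W @ y

def multistart(W, restarts=10, seed=0):
    """Run several resolutions and keep the attractor of lowest energy."""
    rng = np.random.default_rng(seed)
    best_y, best_E = None, np.inf
    for _ in range(restarts):
        y, E = simulate_once(W, rng=rng)
        if E < best_E:
            best_y, best_E = y, E
    return best_y, best_E

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 10))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
y_best, E_best = multistart(W)
```

As the text notes, this improves the result only up to a point: every restart still converges to some locally optimal attractor, so restarts reduce but do not eliminate the risk of a poor solution.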
To solve combinatorial optimization problems efficiently, it is often preferable to use continuous neurons with a variable slope. Thus, by evolving inside the hypercube of solutions, and not only on its vertices as a binary neural network does, the high-energy local minima are better avoided during convergence. Moreover, it has been shown that a continuous neural network is much faster and more reliable than a binary neural network with asynchronous update of the neurons [Ayer et al. 1990; Lee et al. 1991a]. Minimizing $E(y)$ with a continuous $y$ significantly reduces the probability of being trapped in a high-energy local minimum.
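One natural way to exploit a variable slope, sketched below under assumptions not prescribed by the text, is to sharpen the activation gradually during convergence (decreasing the temperature $T$, i.e., increasing the slope $1/T$), so that the trajectory explores the interior of the hypercube early on and is driven toward a vertex at the end. The geometric decay schedule and the quadratic energy are illustrative choices.

```python
import numpy as np

def anneal(W, T0=2.0, T_min=0.05, decay=0.999, dt=0.01, seed=0):
    """Integrate dv/dt = -dE/dy while the slope 1/T is gradually increased."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-0.1, 0.1, W.shape[0])
    T = T0
    while T > T_min:
        y = np.tanh(v / T)
        v += dt * (W @ y)            # -dE/dy = W y for E(y) = -1/2 y^T W y
        T *= decay                   # sharpen the sigmoid step by step
    return np.tanh(v / T)

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 10))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
y = anneal(W)
# with a small final T, the outputs are pushed toward a vertex of the hypercube
```

Starting with a shallow slope lets the state drift through the interior, where the energy landscape is smoother, before the sharpening slope forces a near-binary solution.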