We will show below that a recurrent neural network, such as the Hopfield
network, can minimize the energy function.
8.6.4 Recurrent Hopfield Neural Networks
A Hopfield neural network [Hopfield 1982, 1984], as defined in Chap. 4, has
one layer of fully connected neurons, with a delay of one time unit associated
with each connection; since the state vector is the vector of neuron outputs,
the order of the network is equal to the number of neurons.
Principle of Hopfield Neural Networks for Optimization
When applied to optimization, the network is used as follows: from an initial
state, the network evolves freely towards an attractor, which is generally a
time-independent state (a fixed point of the dynamics). Then the network is
said to have converged: the outputs of the neurons no longer evolve.
More details on the convergence properties can be found in the papers of
Goles [Goles 1995].
The dynamics of the network is generally asynchronous: between two instants
of time, a single neuron, randomly selected, is updated; in other words,
its potential is computed, and its output is updated accordingly.
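As an illustration, one such asynchronous step might be sketched as follows (a minimal sketch assuming binary ±1 outputs and a sign activation; the name `async_step`, the weight matrix `W`, and the bias vector `I` are hypothetical, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def async_step(y, W, I):
    """One asynchronous update: select one neuron at random,
    compute its potential, and threshold it to obtain the new output."""
    i = rng.integers(len(y))
    v = W[i] @ y + I[i]           # potential of neuron i (W has a null diagonal)
    y[i] = 1.0 if v >= 0 else -1.0  # sign activation
    return y
```

Repeating this step drives the network from its initial state towards an attractor of the dynamics.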
When those networks are used to solve optimization problems, the weights
of the connections are found analytically from the formulation of the optimiza-
tion problem; generally, they are directly derived from the energy function
associated with the problem, as will be exemplified below. Moreover, in the
attractor where the network has converged, the outputs of the neurons code for
a solution of the optimization problem.
8.6.4.1 Binary Hopfield Neural Networks
The neural network initially proposed by Hopfield was a discrete-time recurrent
neural network, with a symmetrical connection matrix (the matrix of $w_{ij}$
coefficients) with a null diagonal [Hopfield 1982]. It has been presented in
Chap. 4.
Since each connection has a delay of one time unit, the potential of neuron
$i$ at time $k$ is the weighted sum of the activities of the other neurons at time
$k-1$:
$$v_i(k) = \sum_{j \neq i} w_{ij}\, y_j(k-1) + I_i,$$
where $y_i(k)$ is the output of neuron $i$ at time $k$, $w_{ij}$ is the weight of the connection between neuron $j$ and neuron $i$, and $I_i$ is the bias (constant input) of neuron $i$.
The attractors on which the network converges are the minima of a function,
called the network energy, defined by
$$E(y) = -\frac{1}{2} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} w_{ij}\, y_i y_j - \sum_{i=1}^{N} I_i y_i.$$
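As a hedged illustration with hypothetical random weights (symmetric, with a null diagonal, as required above), the sketch below evaluates this energy along an asynchronous trajectory and checks that it never increases, so the network settles in a local minimum:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(y, W, I):
    # E(y) = -1/2 * sum_{i != j} w_ij y_i y_j - sum_i I_i y_i
    # (with a null diagonal, y @ W @ y equals the double sum over i != j)
    return -0.5 * (y @ W @ y) - I @ y

# Hypothetical symmetric weight matrix with null diagonal
N = 8
A = rng.normal(size=(N, N))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
I = rng.normal(size=N)

y = rng.choice([-1.0, 1.0], size=N)   # random initial state
energies = [energy(y, W, I)]
for _ in range(200):
    i = rng.integers(N)               # asynchronous dynamics: one neuron at a time
    v = W[i] @ y + I[i]               # potential of neuron i
    y[i] = 1.0 if v >= 0 else -1.0    # sign activation
    energies.append(energy(y, W, I))

# The energy is non-increasing along the trajectory
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))
```

This monotonic decrease is the property exploited for optimization: with the symmetry and null-diagonal conditions, each asynchronous update can only lower (or leave unchanged) the energy, hence convergence to a fixed point that is a local minimum of $E$.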