normalization of column $j$: $y_{i,j} := y_{i,j} \big/ \sum_{i} y_{i,j}$,
normalization of row $i$: $y_{i,j} := y_{i,j} \big/ \sum_{j} y_{i,j}$.
During convergence, an annealing is performed by increasing the parameter β in order to force the neuron outputs to converge to binary values. That type of approach is very efficient when the neuron outputs must take the form of a permutation matrix. Very good results were obtained on other combinatorial problems [Rangarajan et al. 1999].
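As a concrete illustration of these alternating normalizations, the following NumPy sketch applies Sinkhorn-style column/row normalizations to $Y = \exp(\beta X)$ while annealing β. The activation matrix X, the annealing schedule, and the function name are illustrative assumptions, not taken from the reference.

```python
import numpy as np

def softassign(X, beta_init=1.0, beta_rate=1.3, n_anneal=20, n_sinkhorn=50):
    """Drive a square activation matrix X towards a permutation matrix.

    At every annealing step, Y = exp(beta * X) is alternately normalized
    column-wise and row-wise (Sinkhorn iterations), which makes Y
    approximately doubly stochastic; increasing beta progressively
    sharpens Y towards binary (0/1) entries.
    """
    beta = beta_init
    for _ in range(n_anneal):
        Y = np.exp(beta * X)
        for _ in range(n_sinkhorn):
            Y /= Y.sum(axis=0, keepdims=True)   # normalize each column j
            Y /= Y.sum(axis=1, keepdims=True)   # normalize each row i
        beta *= beta_rate                       # annealing: increase beta
    return Y

# Illustrative activations (not from the reference), favouring the
# assignment 0 -> 2, 1 -> 0, 2 -> 1
X = np.array([[0.1, 0.2, 0.9],
              [0.8, 0.1, 0.3],
              [0.2, 0.7, 0.1]])
print(np.round(softassign(X), 2))
```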
8.6.5.11 Mixed-Penalty Neural Networks
Recurrent neural networks with mixed penalties can be used to solve 0/1 linear
programming problems, which are combinatorial problems. Those networks
can be viewed as annealed analog Hopfield neural networks, in which the en-
ergy function includes additional terms to help the convergence. Those terms
are directly inspired by energy functions used in interior point methods for
linear programming problems [Gonzaga 1992].
Consider the constraint satisfaction problem consisting in the search for
Q binary variables that simultaneously satisfy M linear inequalities. That
problem is NP-complete. It can be defined as follows:
$$g_k(y) \geq 0, \quad k = 1, \ldots, M, \qquad y \in \{0, 1\}^Q.$$
One associates those variables with the analog neuron outputs $y_i$.
The energy of the neural network is given by [Privault et al. 1998a]
$$E = \alpha \sum_{i=1}^{Q} y_i (1 - y_i) + \sum_{k=1}^{M} \left[ \frac{\gamma}{2}\,\delta_k \,\big(g_k(y)\big)^2 - (1 - \delta_k)\,\ln \big|g_k(y)\big| \right],$$

where $\gamma$ and $\alpha$ are positive real numbers, and where $\delta_k$ is equal to 1 when constraint $k$ is violated, and to 0 otherwise.
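As a sketch of how such an energy can be evaluated, the fragment below computes $E$ for linear constraints of the form $g_k(y) = (Ay - b)_k \geq 0$. The constraint data, the values of α and γ, and the small guard added inside the logarithm are illustrative assumptions, not taken from the reference.

```python
import numpy as np

def mixed_penalty_energy(y, A, b, alpha=1.0, gamma=10.0):
    """Mixed-penalty energy for constraints g_k(y) = (A @ y - b)_k >= 0.

    First term: penalizes non-binary outputs y_i.
    Second term: exterior quadratic penalty on violated constraints
    (delta_k = 1) and interior logarithmic term on satisfied ones
    (delta_k = 0), mirroring the energy defined above.
    """
    g = A @ y - b                        # constraint values g_k(y)
    delta = (g < 0).astype(float)        # 1 if constraint k is violated
    binary_term = alpha * np.sum(y * (1.0 - y))
    exterior = delta * 0.5 * gamma * g**2
    interior = -(1.0 - delta) * np.log(np.abs(g) + 1e-12)  # guard g = 0
    return binary_term + np.sum(exterior + interior)

# Tiny illustrative example (not from the reference): 2 constraints, 3 variables
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
y = np.array([0.6, 0.5, 0.4])
print(mixed_penalty_energy(y, A, b))
```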
In that energy, the first term penalizes the fact that neuron outputs are
not binary.
The second term corresponds to the constraints. For the $k$-th term in the sum:

• The first part penalizes the violation of the $k$-th constraint: it is an exterior penalty term.

• The second part is an interior penalty term, which prevents the system from being attracted to bad local minima. It is not associated with a constraint violation; on the contrary, it is applied to the $k$-th constraint only when that constraint is satisfied. Nevertheless, its goal is to keep the current state of the network far from the constraint boundaries $g_k(y) = 0$, and therefore close to the analytic centre of the problem, i.e., the point of the feasible region that maximizes $\sum_{k=1}^{M} \ln g_k(y)$.
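To make the notion of analytic centre concrete, the following sketch performs a plain gradient ascent on $\sum_k \ln g_k(y)$ over a toy bounded region (the unit square). The problem data, the step size, and the backtracking rule are illustrative choices, not part of the method described in the text.

```python
import numpy as np

def analytic_centre(A, b, y0, lr=0.01, n_steps=5000):
    """Gradient ascent on sum_k ln g_k(y), with g_k(y) = (A @ y - b)_k > 0.

    Starting from a strictly feasible point y0, the iterate moves towards
    the analytic centre of the region {y : A @ y - b >= 0} (assumed bounded).
    """
    y = y0.astype(float).copy()
    for _ in range(n_steps):
        g = A @ y - b
        grad = A.T @ (1.0 / g)           # gradient of sum_k ln g_k(y)
        step = lr * grad
        # halve the step until the new point stays strictly inside the region
        while np.any(A @ (y + step) - b <= 0):
            step *= 0.5
        y += step
    return y

# Illustrative region (not from the reference): the unit square 0 <= y_i <= 1,
# written as A @ y - b >= 0
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([0.0, 0.0, -1.0, -1.0])
print(analytic_centre(A, b, y0=np.array([0.2, 0.8])))  # close to (0.5, 0.5)
```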