where $k$ denotes the $k$-th fundamental memory. The increment $\text{sgn}(x_j^{k+1} x_i^{k+1})$ may take the values $+1$ or $-1$. Thus, the weight $w_{ji}$ is increased or decreased by 1 each time a new fundamental memory is presented to the network. The maximum number of fundamental memories $p$ that can be retrieved successfully from an associative memory of $N$ neurons (error probability $<1\%$) is given by $p_{\max} = \frac{N}{2\ln N}$ [7]. Thus, for $N = 16$ the number of fundamental memories should not be much larger than 2.
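A minimal Python sketch (NumPy; the function name and setup are illustrative, not from the source) shows both points: presenting a bipolar memory changes every off-diagonal weight by exactly $\pm 1$, and the bound $p_{\max} = N/(2\ln N)$ evaluates to about 2.9 for $N = 16$.

```python
import numpy as np

def hebbian_update(W, x):
    """Add one bipolar memory x (entries +/-1) to the weight matrix W.
    Each off-diagonal weight w_ji changes by sgn(x_j * x_i) = +/-1."""
    W += np.outer(x, x)
    np.fill_diagonal(W, 0)               # Hopfield networks use no self-connections
    return W

N = 16
rng = np.random.default_rng(0)
W = np.zeros((N, N))
for k in range(3):                       # present three fundamental memories
    W = hebbian_update(W, rng.choice([-1.0, 1.0], size=N))

p_max = N / (2 * np.log(N))              # capacity bound of [7]
print(f"p_max for N = {N}: {p_max:.2f}")  # ~2.89, i.e. not much larger than 2
```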
The equivalence between Eqs. (7.4) and (10.2) is obvious and becomes clearer if, in place of $\text{sgn}(x_j^{k+1} x_i^{k+1})$, a variable $s^{k+1} = \pm 1$ is used, and if $s^{k+1}$ is also multiplied by the (step) increment $\Delta x$. Thus the learning of the weights of an associative memory is a Wiener process. Consequently, the weights can be considered as Brownian particles, and their mean value and variance can be calculated using the Central Limit Theorem (CLT) as in Sect. 7.2.2.
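The random-walk view can be checked numerically. In the sketch below (a hypothetical simulation; $s^{k+1}$ and $\Delta x$ are the symbols introduced above), many independent weight trajectories are generated, and the sample mean and variance after $k$ steps match the CLT predictions of $0$ and $k(\Delta x)^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
k, trials, dx = 500, 2000, 1.0           # k presentations, step increment dx

# Each presentation adds s * dx with s = +/-1 equiprobable, so the weight
# w(k) = dx * (sum of k independent +/-1 steps) is a discrete Wiener process.
steps = rng.choice([-1.0, 1.0], size=(trials, k))
w = dx * steps.cumsum(axis=1)

print("mean of w(k):    ", w[:, -1].mean())   # ~0
print("variance of w(k):", w[:, -1].var())    # ~k * dx**2 = 500
```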
10.1.2 Mean Value and Variance of the Brownian Weights
The eigenvalues and the eigenvectors of the weight matrix $W$ give all the information needed to interpret the storage capacity of a neural associative memory [7]. The eigenvalues $\lambda_i$, $i = 1, 2, \ldots, N$, of the weight matrix $W$ are calculated from the solution of $f(\lambda) = |W - \lambda I| = 0$. The eigenvectors $q$ of matrix $W$ satisfy $Wq = \lambda q \Rightarrow (W - \lambda I)q = 0$. Then, according to the spectral theorem, one gets [78]:
$$W = Q \Lambda Q^T \;\Rightarrow\; W = \sum_{i=1}^{M} \lambda_i q_i q_i^T \qquad (10.3)$$
where $\lambda_i$ is the $i$-th eigenvalue of matrix $W$ and $q_i$ is the associated $N \times 1$ eigenvector. Since the weight matrix $W$ is symmetric, all the eigenvalues are real and the eigenvectors are orthogonal [78]. The input vector $x$ presented to the Hopfield network can be written as a linear combination of the eigenvectors $q_i$, i.e. $x = \sum_{i=1}^{N} \alpha_i q_i$. Due to the orthogonality of the eigenvectors, the convergence of the Hopfield network to its fundamental memories (attractors) $\tilde{x}$ gives $\tilde{x} = \sum_{i=1}^{N} \lambda_i \alpha_i q_i$. Since the memory vectors $\tilde{x}$ are given by the sum $\sum_{i=1}^{N} \lambda_i \alpha_i q_i$, the eigenvectors of the symmetric matrix $W$ constitute an orthogonal basis of the subspace spanned by the memory vectors $\tilde{x}$.
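Equation (10.3) and the eigenbasis expansion can be verified numerically. A minimal sketch, assuming random bipolar memories and using NumPy's `eigh` for the symmetric eigenproblem (all names are illustrative):

```python
import numpy as np

N, p = 64, 3
rng = np.random.default_rng(2)
memories = rng.choice([-1.0, 1.0], size=(p, N))
W = sum(np.outer(x, x) for x in memories)    # symmetric Hebbian weight matrix
np.fill_diagonal(W, 0)

# Spectral theorem: W = Q Lambda Q^T = sum_i lambda_i q_i q_i^T  (Eq. 10.3)
lam, Q = np.linalg.eigh(W)                   # real eigenvalues, orthonormal columns
W_rebuilt = sum(l * np.outer(q, q) for l, q in zip(lam, Q.T))
print(np.allclose(W, W_rebuilt))             # True

# Expand an input x in the eigenbasis: x = sum_i a_i q_i, so W x = sum_i lam_i a_i q_i
x = rng.choice([-1.0, 1.0], size=N)
a = Q.T @ x                                  # coefficients a_i = q_i^T x
print(np.allclose(W @ x, Q @ (lam * a)))     # True
```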
It can be shown that the memory vectors become collinear with the eigenvectors of matrix $W$, thus constituting an orthogonal basis of the space $V$, if the following two conditions are satisfied: (a) the number of neurons $N$ of the Hopfield network is large (high-dimensional spaces), and (b) the memory vectors are chosen randomly.
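This collinearity is easy to probe numerically. A minimal sketch, assuming random bipolar memories and measuring the cosine between each stored memory $\tilde{x}$ and $W\tilde{x}$ (collinearity means the cosine tends to 1 as $N$ grows):

```python
import numpy as np

def collinearity(N, p=3, seed=3):
    """Cosine between each stored memory x and W x, for Hebbian W = X^T X."""
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(p, N))  # p random bipolar memories (rows)
    Wx = (X @ X.T) @ X                        # row m equals (W x_m)^T
    return (Wx * X).sum(axis=1) / (np.linalg.norm(Wx, axis=1) * np.sqrt(N))

for N in (16, 256, 4096):
    print(N, collinearity(N).round(4))        # cosines approach 1.0 as N grows
```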
The following lemmas give sufficient conditions for the fundamental memory
vectors (attractors) to coincide with the eigenvectors of the weight matrix W :