Fig. 10.2 Partition of the real axis $x \in \mathbb{R}$ into an infinite number of symmetric triangular possibility distributions (fuzzy sets)
For high-dimensional spaces ($N \to \infty$) the normalized inner product $x_i^T x_k / N$ tends to $0$, and the vectors $x_i$ and $x_k$ will be practically orthogonal.
Thus, taking into account the orthogonality of the fundamental memories $x_k$ and Lemma 1, it can be concluded that memory patterns in high-dimensional spaces practically coincide with the eigenvectors of the weight matrix $W$.
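To see this property concretely, the following is a minimal numerical sketch in Python. It assumes bipolar random fundamental memories and the common Hebbian outer-product form $W = \frac{1}{N}\sum_k x^k (x^k)^T$ (the exact form of Eq. (10.2) is not reproduced in this excerpt); for large $N$, each stored pattern then behaves almost as an eigenvector of $W$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 2000, 5                      # dimension N much larger than pattern count p

# Fundamental memories: random bipolar vectors, nearly orthogonal for large N
X = rng.choice([-1.0, 1.0], size=(p, N))

# Assumed Hebbian weight matrix W = (1/N) * sum_k x^k (x^k)^T
W = (X.T @ X) / N

# For each stored pattern, W x^k stays nearly parallel to x^k,
# i.e. x^k behaves like an eigenvector of W (cosine close to 1)
for k in range(p):
    y = W @ X[k]
    cos = y @ X[k] / (np.linalg.norm(y) * np.linalg.norm(X[k]))
    print(f"pattern {k}: cosine(W x^k, x^k) = {cos:.4f}")
```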
10.1.3 Learning Through Unitary Quantum Mechanical Operators
As already mentioned, the update of the weights $w_{ij}$ of the associative memories is performed according to Eq. (10.2), which implies that the value of $w_{ij}$ is increased or decreased as indicated by $\mathrm{sgn}(x_i x_j)$. If the weight $w_{ij}$ is assumed to be a stochastic variable, described by the probability (possibility) distribution of Fig. 7.1b, then the term $\mathrm{sgn}(x_i^k x_j^k)$ of Hebbian learning can be replaced by a stochastic increment.
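As an illustration only, the sketch below contrasts the deterministic sign-based Hebbian increment with a stochastic variant. The learning rate eta, the noise level sigma, and the Gaussian form of the noise are assumptions standing in for the distribution of Fig. 7.1b, not the book's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def hebbian_sign_update(W, x, eta=0.01):
    """Deterministic update: move each w_ij by eta * sgn(x_i x_j)."""
    return W + eta * np.sign(np.outer(x, x))

def stochastic_sign_update(W, x, eta=0.01, sigma=0.005):
    """Treat w_ij as a stochastic variable: the increment is random,
    with its mean following sgn(x_i x_j) (Gaussian noise is an assumption)."""
    S = np.sign(np.outer(x, x))
    noise = rng.normal(0.0, sigma, size=W.shape)
    return W + eta * S + noise

N = 4
W = np.zeros((N, N))
x = rng.choice([-1.0, 1.0], size=N)   # one bipolar memory pattern
W = stochastic_sign_update(W, x)
print(W)
```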
A way to substantiate this stochastic increment is to describe the variable $w_{ij}$ in terms of a possibility distribution (fuzzy sets). To this end, the real axis $x$, where $w_{ij}$ takes its values, is partitioned into triangular possibility distributions (fuzzy sets) $A_1, A_2, \ldots, A_{n-1}, A_n$ (see Fig. 10.2). These sufficiently approximate the Gaussians depicted in Fig. 7.1b. Then, the increase of the fuzzy (stochastic) weight is performed through the possibility transition matrix $R^{in}$, which results in $A_n = R^{in} \circ A_{n-1}$, with $\circ$ being the max-min operator. Similarly, the decrease of the fuzzy weight is performed through the possibility transition matrix $R^{de}$, which results in $A_{n-1} = R^{de} \circ A_n$ [163, 195].
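The max-min composition itself is standard and can be sketched as follows. The specific entries of the transition matrix (here a hypothetical one-step "shift-up" matrix named R_in) are an assumption for illustration, since the excerpt does not give them:

```python
import numpy as np

def max_min_compose(R, A):
    """Max-min composition: (R o A)[i] = max_m min(R[i, m], A[m])."""
    return np.max(np.minimum(R, A[np.newaxis, :]), axis=1)

# Membership grades of the current fuzzy weight over n triangular sets A_1..A_n
A_prev = np.array([0.0, 0.2, 1.0, 0.2, 0.0])   # weight mostly in the middle set

# Hypothetical transition matrix: R_in[i, i-1] = 1 moves all possibility
# mass one fuzzy set to the right, i.e. an increase of the fuzzy weight
n = len(A_prev)
R_in = np.zeros((n, n))
for i in range(1, n):
    R_in[i, i - 1] = 1.0

A_next = max_min_compose(R_in, A_prev)
print(A_next)   # [0.  0.  0.2 1.  0.2] -- distribution shifted toward larger weights
```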