10.2.3 Applications of the Stochastic Associative Memory Model
The proposed stochastic associative memory model also has practical significance:
1. Stochastic neuron models enable the study and understanding of probabilistic decision making [31].
2. Stochastic neuron models can explain instabilities in short-term memory and attentional systems. These instabilities become particularly apparent when the basins of attraction become too shallow or too deep (in terms of an energy profile), and they have been related to some of the symptoms of neurological diseases. The approach thus enables predictions to be made about the effects of pharmacological agents [43, 44, 75, 114, 115, 171].
3. Stochastic neuron models can advance neural computation in the direction of quantum computation [137, 165].
10.3 Attractors in Associative Memories with Stochastic Weights
It will be examined how weights that follow the QHO model affect the number of attractors in associative memories. The storage and recall of patterns in quantum associative memories will be tested through a numerical example and simulation tests.
1. Superposition of weight matrices:
Assume that the fundamental memory patterns are the following binary vectors: $s_1 = [1, 1, 1]^T$, $s_2 = [1, 1, -1]^T$, $s_3 = [1, -1, 1]^T$, which are linearly independent but not orthogonal. Approximate orthogonality can be expected in high-dimensional vector spaces if the elements of the memory vectors are chosen randomly. In this example, to obtain orthogonality of the memory vectors, Gram-Schmidt orthogonalization is used according to

$$u_k = s_k - \sum_{j=1}^{k-1} \frac{u_j^T s_k}{u_j^T u_j}\, u_j \qquad (10.12)$$

This gives the orthogonal vectors $u_1 = [1, 1, 1]^T$, $u_2 = [\tfrac{2}{3}, \tfrac{2}{3}, -\tfrac{4}{3}]^T$, $u_3 = [1, -1, 0]^T$. The weight matrix which results from Hebbian learning asymptotically becomes a Wiener process. Matrix $W$ is $W = \frac{1}{3}\left(u_1 u_1^T + u_2 u_2^T + u_3 u_3^T\right)$, i.e.

$$W = \frac{1}{27}\begin{pmatrix} 22 & 4 & 1 \\ 4 & 22 & 1 \\ 1 & 1 & 25 \end{pmatrix}$$
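As a quick numerical cross-check of the example above, the following Python sketch (not part of the original text; it assumes numpy and the standard sign-threshold recall rule of Hopfield-type memories) reproduces the Gram-Schmidt step of Eq. (10.12), builds the Hebbian weight matrix W, and checks that the stored patterns are recovered.

import numpy as np

# Fundamental memory patterns s_1, s_2, s_3 from the example above
s = [np.array([1.0, 1.0, 1.0]),
     np.array([1.0, 1.0, -1.0]),
     np.array([1.0, -1.0, 1.0])]

# Gram-Schmidt orthogonalization, Eq. (10.12):
# u_k = s_k - sum_{j<k} (u_j^T s_k / u_j^T u_j) u_j
u = []
for s_k in s:
    u_k = s_k.copy()
    for u_j in u:
        u_k = u_k - (u_j @ s_k) / (u_j @ u_j) * u_j
    u.append(u_k)

for k, u_k in enumerate(u, start=1):
    print(f"u_{k} =", u_k)   # expected: [1 1 1], [2/3 2/3 -4/3], [1 -1 0]

# Hebbian weight matrix W = (1/3) * sum_k u_k u_k^T
W = sum(np.outer(u_k, u_k) for u_k in u) / 3.0
print("W =")
print(W)                     # expected: (1/27) * [[22 4 1], [4 22 1], [1 1 25]]

# Recall check (assumed sign-threshold update, as in Hopfield-type memories):
# each stored pattern should be a fixed point of x -> sign(W x)
for k, s_k in enumerate(s, start=1):
    recalled = np.sign(W @ s_k)
    print(f"s_{k} recovered:", np.array_equal(recalled, s_k))

Under this thresholded recall rule all three fundamental memories are recovered exactly, which is the behaviour the orthogonalized Hebbian construction is intended to ensure.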