Biomedical Engineering Reference
In-Depth Information
them will be considered 0). This energy function determines the behavior of the net.

Hopfield's Associative Memory

Let us consider $\{ X^{(k)} : k = 1, \ldots, K \}$, a set of bipolar patterns to be loaded into the network. In order to store these patterns, the weight matrix $W$ must be determined. This is achieved by applying Hebb's classical rule for learning. So, the change of the weights, when pattern $X = (X_i)$ is introduced into the network, is given by $\Delta w_{i,j} = X_i X_j$. Thus, the final expression for the weights is:

$$w_{i,j} = \sum_{k=1}^{K} X_i^{(k)} X_j^{(k)} \qquad (1.2)$$

In this case, the energy function that is minimized by the network can be expressed in the following terms:

$$E(V) = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{k=1}^{K} X_i^{(k)} X_j^{(k)} V_i V_j \qquad (1.3)$$
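As a quick numerical illustration of (1.2) and (1.3), here is a minimal Python sketch; the bipolar patterns are made-up example data, not taken from the text:

```python
import numpy as np

# Two hypothetical bipolar patterns (values in {-1, +1}), N = 6 neurons.
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])
K, N = patterns.shape

# Hebb's rule (1.2): w_ij = sum over k of X_i^(k) * X_j^(k).
W = patterns.T @ patterns
np.fill_diagonal(W, 0)  # self-connections are commonly set to zero

def energy(V):
    """Energy (1.3): E(V) = -1/2 * sum_ij w_ij V_i V_j."""
    return -0.5 * V @ W @ V

# A stored pattern sits at lower energy than an arbitrary state.
print(energy(patterns[0]))
print(energy(np.ones(N, dtype=int)))
```

Stored patterns land at (or near) minima of this energy, which is what makes the recall dynamics described next work.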
In order to retrieve a pattern, once the learning
phase has finished, the net is initialized with the
known part of the pattern (called probe). Then, the
dynamics makes the network converge to a stable
state (due to the decrease of the energy function),
corresponding to a local minimum. Usually this
stable state is close to the initial probe.
If all input patterns form an orthogonal set, then they are correctly retrieved; otherwise, some errors may occur in the recall procedure. But, as the dimension of the pattern space is N, there is no orthogonal set with cardinality greater than N. This implies that if the number of patterns exceeds the number of neurons, errors may occur. Thus, the capacity cannot be greater than 1 in this model.
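The retrieval procedure just described can be sketched as follows (an illustrative Python fragment with a hypothetical stored pattern; the weight construction repeats Hebb's rule (1.2)):

```python
import numpy as np

# One hypothetical bipolar pattern stored via Hebb's rule.
pattern = np.array([1, -1, 1, 1, -1, 1, -1, 1])
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

def recall(probe, sweeps=10):
    """Update neurons one at a time; each update cannot increase
    the energy, so the state converges to a stable state
    (a local minimum of the energy)."""
    V = probe.copy()
    for _ in range(sweeps):
        for i in range(len(V)):
            V[i] = 1 if W[i] @ V >= 0 else -1
    return V

# Probe: the stored pattern with two corrupted entries.
probe = pattern.copy()
probe[[0, 3]] *= -1
print(recall(probe))  # converges back to the stored pattern here
```

With a single stored pattern and a probe this close to it, every update flips a corrupted entry back, so the stable state reached is the stored pattern itself.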
MREM Model with Semi-Parallel Dynamics

The Multivalued REcurrent Model (MREM) consists of a recurrent neural network formed by $N$ neurons, where the state of each neuron $i$ is defined by its output $V_i$ ($i = 1, \ldots, N$), taking values in any finite set $\mathcal{M} = \{ m_1, m_2, \ldots, m_L \}$. This set does not need to be numerical.

The state of the network, at time $t$, is given by an $N$-dimensional vector, $V(t) = (V_1(t), V_2(t), \ldots, V_N(t))$. Associated to every state vector, an energy function, characterizing the behaviour of the net, is defined:

$$E(V) = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} w_{i,j} \, f(V_i, V_j) + \sum_{i=1}^{N} \theta_i(V_i) \qquad (2.1)$$

where $w_{i,j}$ is the weight of the connection from the $j$-th neuron to the $i$-th neuron, and $f : \mathcal{M} \times \mathcal{M} \to \mathbb{R}$ can be considered as a measure of similarity between the outputs of two neurons, usually verifying the following similarity conditions:

1. For all $x$, $f(x, x) = c$.
2. $f$ is a symmetric function: for every $x, y$, $f(x, y) = f(y, x)$.
3. If $x \neq y$, then $f(x, y) \leq c$.

and $\theta_i : \mathcal{M} \to \mathbb{R}$ are the threshold functions. Since thresholds will not be used for content-addressable memory, henceforth we will consider $\theta_i$ to be the zero function for all $i$.

The introduction of this similarity function provides the network with a wide range of possibilities to represent different problems (Mérida et al., 2001; Mérida et al., 2002). So, it leads to a better representation than other multi-valued models, like SOAR and MAREN (Erdem et al., 1996; Ozturk et al., 1997), since in those models most of the information enclosed in the multi-valued representation is lost by the use of the sign function, which only produces values in $\{-1, 0, 1\}$.

It is clear that MREM, using bipolar ($\mathcal{M} = \{-1, 1\}$) or bi-valued ($\mathcal{M} = \{0, 1\}$) neurons, along with the similarity function $f(x, y) = xy$ and constant bias functions, reduces to Hopfield's model. So, this model can be considered a powerful generalization of Hopfield's model.
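To make the MREM energy (2.1) concrete, here is a small Python sketch with made-up symmetric weights and a symbolic (non-numerical) alphabet, none of it from the original text, using a Kronecker-delta similarity function and zero thresholds:

```python
import itertools
import numpy as np

M = ["red", "green", "blue"]  # hypothetical non-numerical state set
N = 4                         # number of neurons

rng = np.random.default_rng(1)
W = rng.normal(size=(N, N))
W = (W + W.T) / 2             # symmetric, made-up weights

def f(x, y):
    """Similarity: f(x,x) = 1 (the constant c), symmetric,
    and f(x,y) = 0 <= c for x != y."""
    return 1.0 if x == y else 0.0

def energy(V):
    """E(V) = -1/2 sum_ij w_ij f(V_i, V_j), thresholds taken as zero."""
    return -0.5 * sum(W[i, j] * f(V[i], V[j])
                      for i in range(N) for j in range(N))

# M^N is finite, so for this toy size the minimum-energy (stable)
# state can be found by exhaustive search.
best = min(itertools.product(M, repeat=N), key=energy)
print(best, energy(best))
```

Replacing M with {-1, 1} and f with f(x, y) = xy turns this energy into Hopfield's, which is exactly the reduction noted in the text.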
 