At every instant, the net evolves to reach a state of lower energy than the current one. It has been proved that the MREM model, with its associated dynamics, always converges to a minimal state. This result is particularly important when dealing with combinatorial optimization problems, where the application of MREM has been very fruitful (López-Rodríguez et al., 2006; Mérida-Casermeiro et al., 2001a, 2001b, 2002a, 2002b, 2003, 2004, 2005).
MREM as Auto-Associative Memory
Now, let {X^{(k)} : k = 1, …, K} be a set of patterns to be loaded into the neural network. Then, in order to store a pattern X = (X_1, X_2, …, X_N), the components of the matrix W must be modified so as to make X the state of the network with minimal energy. Since the energy function is already defined, we modify the components of matrix W in order to reduce the energy of the state V = X by the rule:

\Delta w_{i,j} = 2\, f(X_i, X_j)

The coefficient 2 does not produce any effect on the storage of the patterns and is chosen here only for simplicity; being immaterial, it is dropped below. Considering that, at first, W = 0 (that is, all the states of the network have the same energy) and adding over all the patterns, the next expression is obtained:

w_{i,j} = \sum_{k=1}^{K} f\left(X_i^{(k)}, X_j^{(k)}\right)    (3.1)

Equation (3.1) is a generalization of Hebb's postulate of learning, because the weight w_{i,j} between two neurons is increased in correspondence with their similarity.

It must be pointed out that, when bipolar neurons and the product function f(x, y) = xy are used, the well-known learning rule for patterns in Hopfield's network is obtained. In fact, this is equivalent to choosing f(x, y) = 1 if x = y and f(x, y) = 0 otherwise, since for bipolar values xy = 2·1[x = y] − 1, so the two choices differ only by a scaling and a constant shift.

In what follows, we will consider the similarity function given by f(x, y) = 1 if x = y, and f(x, y) = −1 otherwise.

In order to recover a loaded pattern, the network is initialized with the known part of that pattern. The network dynamics will converge to a stable state (due to the decrease of the energy function), that is, a minimum of the energy function, and this stable state will be the answer of the network. Usually this stable state is close to the initial one.
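As an illustration, the storage rule (3.1) and the retrieval procedure just described can be sketched in a few lines of Python. This is a minimal sketch of ours, not the authors' implementation: it assumes patterns are given as a NumPy array of shape (K, N) over a finite alphabet, uses the similarity function chosen above, and the greedy one-neuron-at-a-time update is only an illustrative stand-in for the MREM dynamics (which this excerpt specifies only as energy-decreasing). All function names are hypothetical.

import numpy as np

def f(x, y):
    # Similarity function chosen in the text: 1 if the states agree, -1 otherwise.
    return np.where(x == y, 1, -1)

def store(patterns):
    # Learning rule (3.1): w_ij = sum_k f(X_i^(k), X_j^(k)).
    # 'patterns' is assumed to be an integer array of shape (K, N).
    N = patterns.shape[1]
    W = np.zeros((N, N))
    for X in patterns:
        W += f(X[:, None], X[None, :])  # pairwise similarity matrix of one pattern
    return W

def energy(W, V):
    # MREM energy of a state V: E(V) = -1/2 * sum_ij w_ij f(V_i, V_j).
    return -0.5 * np.sum(W * f(V[:, None], V[None, :]))

def recall(W, V0, alphabet, max_iters=100):
    # Illustrative dynamics: repeatedly move one neuron to the value that
    # strictly lowers the energy, until a stable state (a minimum of E) is reached.
    V = V0.copy()
    for _ in range(max_iters):
        changed = False
        for i in range(len(V)):
            candidates = [(energy(W, np.r_[V[:i], s, V[i + 1:]]), s) for s in alphabet]
            e_new, s_new = min(candidates)
            if e_new < energy(W, V):
                V[i], changed = s_new, True
        if not changed:
            break
    return V

For instance, after W = store(patterns), calling recall(W, noisy_copy, alphabet=range(1, 5)) drives a corrupted state downhill in energy until it stabilizes, typically at the stored pattern closest to the initialization, exactly as described above.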
How to Avoid Spurious States
When a pattern X is loaded into the network by modifying the weight matrix W, the energy of the state V = X is not the only one that decreases. This fact can be explained in terms of the so-called associated vectors.
Given a state V, its associated matrix is defined as G_V = (g_{i,j}), such that g_{i,j} = f(V_i, V_j). Its associated vector is A_V = (a_k), with a_k = g_{i,j} for k = j + N(i − 1); that is, it is built by expanding the associated matrix as a vector of N² components.
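In code, the associated vector is simply the row-by-row flattening of the associated matrix. A minimal sketch, reusing the illustrative similarity function f from the earlier snippet (the helper name is ours):

import numpy as np

def associated_vector(V, f):
    # G_V = (g_ij) with g_ij = f(V_i, V_j); A_V flattens G_V row by row,
    # so the 0-based component k = i*N + j matches k = j + N(i-1) in the text.
    G = f(V[:, None], V[None, :])
    return G.reshape(-1)  # vector of N*N components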
With this notation, the energy function can be expressed as:
E(V) = -\frac{1}{2} \sum_{k=1}^{K} \left\langle A_{X^{(k)}}, A_V \right\rangle    (3.2)

where ⟨·,·⟩ denotes the usual inner product.
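A quick numerical check of (3.2) is possible with the sketches above (store, energy, f and associated_vector are our illustrative helpers, not library functions):

rng = np.random.default_rng(0)
patterns = rng.integers(1, 5, size=(3, 6))   # K = 3 patterns, N = 6, states in {1, ..., 4}
V = rng.integers(1, 5, size=6)               # an arbitrary state
W = store(patterns)
lhs = energy(W, V)                           # energy computed directly from the weights
rhs = -0.5 * sum(associated_vector(X, f) @ associated_vector(V, f) for X in patterns)
assert np.isclose(lhs, rhs)                  # matches the inner-product form (3.2)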
Lemma 1. The increment of energy of a state V when a pattern X is loaded into the network by using Equation (3.1) is given by:

\Delta E_X(V) = -\frac{1}{2} \left\langle A_X, A_V \right\rangle    (3.3)
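Continuing the numerical sketch, Lemma 1 can be checked by loading one extra pattern with the per-pattern increment Δw_{i,j} = f(X_i, X_j) of Equation (3.1) and comparing the resulting energy change with (3.3):

X_new = rng.integers(1, 5, size=6)               # one additional pattern
W_new = W + f(X_new[:, None], X_new[None, :])    # per-pattern increment from (3.1)
delta = energy(W_new, V) - energy(W, V)          # increment of energy of state V
assert np.isclose(delta, -0.5 * associated_vector(X_new, f) @ associated_vector(V, f))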
Lemma 2. Given a state vector V, we have A_V = A_{−V}. So E(V) = E(−V).
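Assuming the statement of Lemma 2 as reconstructed above, it is easy to verify for bipolar states: with the similarity function f(x, y) = ±1, negating every component leaves all pairwise similarities, and hence the associated vector and the energy, unchanged. Reusing the helpers from the sketches above:

V_bip = rng.choice([-1, 1], size=8)                     # a bipolar state
assert np.array_equal(associated_vector(V_bip, f),
                      associated_vector(-V_bip, f))     # A_V = A_{-V}
W_bip = store(rng.choice([-1, 1], size=(3, 8)))
assert np.isclose(energy(W_bip, V_bip),
                  energy(W_bip, -V_bip))                # E(V) = E(-V)

In particular, the opposite −X^{(k)} of every stored pattern is an equally deep minimum of E, which is precisely the kind of spurious state this section is concerned with.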