Digital Signal Processing Reference
In-Depth Information
[Figure 6.13: (a) Hopfield neural network; (b) propagation rule and activation function for the Hopfield network: τ_t(i) = Σ_{j=1}^N x_t(j) w_ji and x_{t+1}(i) = f(τ_t(i), x_t(i)), with zero self-connections (w_11 = w_22 = ... = w_NN = 0).]
computation only for i < j.
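Because the Hopfield weight matrix is symmetric (w_ij = w_ji) with a zero diagonal, only the upper triangle i < j needs to be computed explicitly. The sketch below illustrates this with the standard Hebbian storage rule, w_ij = Σ_p p_i p_j over the stored patterns; the function name and loop structure are illustrative choices, not taken from the text.

```python
import numpy as np

def hopfield_weights(patterns):
    """Hebbian weight matrix for a Hopfield network (a common storage rule).

    Since w[i, j] == w[j, i] and w[i, i] == 0, the loop computes
    only the upper triangle (i < j) and mirrors it.
    """
    patterns = np.asarray(patterns, dtype=float)  # one stored pattern per row
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):          # computation only for i < j
            w[i, j] = np.dot(patterns[:, i], patterns[:, j])
            w[j, i] = w[i, j]              # symmetry fills the lower triangle
    return w
```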
2. Initialization: Draw an unknown pattern. The pattern to be learned is now presented to the network. If p = [p_1, p_2, ..., p_N] is the unknown pattern, set

x_0(i) = p_i,   1 ≤ i ≤ N   (6.43)
3. Adaptation: Iterate until convergence. Using the propagation rule and the activation function, the next state is

x_{t+1}(i) = f( Σ_{j=1}^N x_t(j) w_ij , x_t(i) )   (6.44)

This process is continued until a further iteration produces no state change at any node.
4. Continuation: For learning a new pattern, repeat steps 2 and 3.
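Steps 2 and 3 above can be sketched as follows for the binary case. The activation f here keeps the previous state when the net input is zero, which is one common choice; the function name and the stopping bound max_iter are illustrative assumptions, not part of the text.

```python
import numpy as np

def hopfield_recall(w, p, max_iter=100):
    """Initialize with pattern p (eq. 6.43) and iterate eq. (6.44)
    until no node changes state."""
    x = np.array(p, dtype=float)               # step 2: x_0(i) = p_i
    for _ in range(max_iter):
        net = w @ x                            # propagation rule: sum_j x_t(j) w_ij
        x_next = np.where(net > 0, 1.0,
                 np.where(net < 0, -1.0, x))   # f(net, x_t(i)): keep state on ties
        if np.array_equal(x_next, x):          # step 3: stop when no state change
            return x_next
        x = x_next
    return x
```

Starting the iteration from a noisy version of a stored pattern should drive the state back to that pattern, provided the initial state lies in its basin of attraction.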
There are two types of Hopfield neural networks: binary and continuous. The differences between the two are summarized in Table 6.2.
In dynamical-systems parlance, the input vectors describe arbitrary initial states, and the reference vectors describe attractors, or stable states. An input pattern cannot leave the region around an attractor, which is called the basin of attraction.