of cortex neurons, $N$ is the number of neurons in CX1 (the same as in CX2), and $f$ denotes a step function. Eq. (3) endows CX2 with an associative function for input patterns and a long-term memory formation function for the output patterns of the hippocampus.
The learning rule of the synapses in CX2 is given by Eq. (4), a Hebbian rule that uses the outputs of the neurons at times $t$ and $t-1$:
\Delta w_{ij}^{cx2} = \alpha^{hc} \, x_i^{cx2}(t) \, x_j^{cx2}(t-1) . \qquad (4)

where $\alpha^{hc}$ is a learning-rate parameter.
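As a rough illustrative sketch (not code from the paper), the update of Eq. (4) is an outer product between the current and previous CX2 activity vectors; the function name hebbian_update_cx2 and the default value of alpha_hc below are assumptions.

import numpy as np

def hebbian_update_cx2(w_cx2, x_cx2_t, x_cx2_prev, alpha_hc=0.01):
    # Eq. (4): delta_w_ij = alpha_hc * x_i^{cx2}(t) * x_j^{cx2}(t-1)
    # w_cx2      : (N, N) weight matrix of the CX2 synapses
    # x_cx2_t    : (N,) CX2 outputs at time t
    # x_cx2_prev : (N,) CX2 outputs at time t-1
    # alpha_hc   : learning rate (placeholder value, not from the paper)
    delta_w = alpha_hc * np.outer(x_cx2_t, x_cx2_prev)
    return w_cx2 + delta_w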
The hippocampus is composed of DG, MCNN, and CA1 neurons. DG performs pattern encoding (Eq. (5)) with competitive learning (Eq. (6)):
x_i^{dg}(t) = f\left( \sum_{j=1}^{N} w_{ij}^{dg \cdot cx1} \, x_j^{cx1}(t) - \theta^{dg} \right) . \qquad (5)

\Delta w_{ij}^{dg \cdot cx1} = \beta^{hc} \, x_i^{dg}(t) \, x_j^{cx1}(t) . \qquad (6)
where $w_{ij}^{dg \cdot cx1}$ denotes the weight of the connection between the $i$-th neuron in DG and the $j$-th neuron in CX1 (output), initialized to random values in $(0, L)$ (generally $0 < L < 1$); $x_j^{cx1}$ is the output of the $j$-th neuron in CX1; $\theta^{dg}$ is a threshold value of the DG neurons; $\beta^{hc}$ is a learning rate; and $\alpha^{hc} < \beta^{hc}$.
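The DG step of Eqs. (5)-(6) can be sketched as follows, assuming binary CX1 outputs and a hard step function for $f$; the name dg_encode and the sample values of $L$, $\theta^{dg}$, and $\beta^{hc}$ are placeholders, not values taken from the paper.

import numpy as np

def dg_encode(w_dg_cx1, x_cx1, theta_dg, beta_hc=0.05):
    # Eq. (5): x_i^{dg}(t) = f( sum_j w_ij^{dg.cx1} x_j^{cx1}(t) - theta^{dg} ), f = step function
    x_dg = (w_dg_cx1 @ x_cx1 - theta_dg > 0).astype(float)
    # Eq. (6): delta_w_ij^{dg.cx1} = beta^{hc} * x_i^{dg}(t) * x_j^{cx1}(t)
    w_dg_cx1 = w_dg_cx1 + beta_hc * np.outer(x_dg, x_cx1)
    return x_dg, w_dg_cx1

# Usage: weights start at random(0, L); L = 0.5 here is an arbitrary choice.
rng = np.random.default_rng(0)
N = 8
w0 = rng.uniform(0.0, 0.5, size=(N, N))
x_cx1 = rng.integers(0, 2, size=N).astype(float)
x_dg, w1 = dg_encode(w0, x_cx1, theta_dg=1.0)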
CA3 accepts the encoded information from DG and executes chaotic processing for storage and recollection with the MCNN. The MCNN consists of two CNN layers, whose dynamics are given by Eqs. (12)-(15), and one output layer, whose neuron outputs are given by Eqs. (7)-(9):
k = \arg\max_{i} \sum_{j=0}^{n} w_{ij}^{ca3out \cdot cnn1} \left( 2 x_j^{cnn1}(t) - 1 \right) . \qquad (7)

k = \arg\max_{i} \sum_{j=0}^{n} w_{ij}^{ca3out \cdot cnn2} \left( 2 x_j^{cnn2}(t) - 1 \right) . \qquad (8)
x_i^{ca3out}(t) = \begin{cases} 1 & (i = k) \\ 0 & (i \neq k) \end{cases} . \qquad (9)
Here the output of the $j$-th neuron in CNN1, $x_j^{cnn1}(t)$, and the output of the $j$-th neuron in CNN2, $x_j^{cnn2}(t)$, are transformed into the output of the MCNN, given by the $i$-th neuron of its output layer, $x_i^{ca3out}(t)$. $w_{ij}^{ca3out \cdot cnn1}$ and $w_{ij}^{ca3out \cdot cnn2}$ denote the connections between the output layer of the MCNN and CNN1 and CNN2, respectively.
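As a concrete reading of the winner-take-all output in Eqs. (7)-(9), the sketch below maps a CNN layer's outputs to bipolar values via $2x - 1$ and returns a one-hot output vector; the helper name ca3_output is an assumption, and the same function would be applied with either $w^{ca3out \cdot cnn1}$ or $w^{ca3out \cdot cnn2}$.

import numpy as np

def ca3_output(w_out_cnn, x_cnn):
    # Eqs. (7)/(8): k = argmax_i sum_j w_ij * (2 x_j^{cnn}(t) - 1)
    drive = w_out_cnn @ (2.0 * x_cnn - 1.0)
    k = int(np.argmax(drive))
    # Eq. (9): 1 for the winning neuron, 0 for all others
    x_out = np.zeros(w_out_cnn.shape[0])
    x_out[k] = 1.0
    return x_out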