where $S_i$ denotes the input stimuli from the sensory cortex and thalamus to the $i$-th neuron in the amygdala, $A_i$ denotes the output of the $i$-th neuron in the amygdala, $O_i$ denotes the output of the $i$-th neuron in the orbitofrontal cortex, and $E$ is the output of the amygdala, which controls the internal state of the MCNN by tuning the damping parameters in the same way as in our conventional method. $V_i$ and $W_i$ are the connection weights of the amygdala model, and $\alpha$ and $\beta$ are the learning rates. The reward $R$, coming from other areas of the brain, is used to update Eq. (21) and Eq. (22), which belong to the reinforcement learning algorithm.

Suppose that the degree of instability of a CNN provides a reward $R$ to the amygdala model; then emotional control can be realized by Eqs. (18)-(27).
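The interplay of these variables can be sketched in code. The following is a minimal sketch of one learning step using the classical Morén-Balkenius brain-emotional-learning forms; since Eqs. (18)-(22) are not reproduced in this excerpt, the exact update rules below are assumptions and may differ in detail from the paper's equations.

```python
import numpy as np

def bel_step(S, V, W, R, alpha, beta):
    """One hypothetical brain-emotional-learning update step.

    S           : input stimuli S_i to the amygdala neurons
    V, W        : amygdala / orbitofrontal connection weights V_i, W_i
    R           : reward coming from other brain areas
    alpha, beta : learning rates
    (Classical Moren-Balkenius forms; the paper's own Eqs. (18)-(22)
    may differ in detail.)
    """
    A = S * V                                   # amygdala neuron outputs A_i
    O = S * W                                   # orbitofrontal neuron outputs O_i
    E = A.sum() - O.sum()                       # amygdala output E, inhibited by OFC
    V = V + alpha * S * max(0.0, R - A.sum())   # amygdala weight update
    W = W + beta * S * (E - R)                  # orbitofrontal weight update
    return A, O, E, V, W
```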
$$
S_i = \frac{1}{\Delta N_{AMY}} \sum_{j = i \cdot \Delta N_{AMY} + 1}^{(i+1) \cdot \Delta N_{AMY}} \bigl( x_j(t+1) - x_j(t) \bigr), \qquad (23)
$$

where $i = 0, \ldots, N_{AMY} - 1$.
$$
\Delta N_{AMY} = \frac{n}{N_{AMY}}. \qquad (24)
$$
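Eqs. (23)-(24) can be read as a block-averaging step: the $n$ CNN outputs are split into $N_{AMY}$ consecutive blocks of size $\Delta N_{AMY}$, and each amygdala input $S_i$ is the mean change of its block. A minimal sketch, assuming $n$ is divisible by $N_{AMY}$ (function and variable names here are ours, not the paper's):

```python
import numpy as np

def amygdala_inputs(x_t, x_t1, n_amy):
    """Compute the amygdala inputs S_i per Eqs. (23)-(24).

    x_t, x_t1 : CNN outputs x_j(t) and x_j(t+1), arrays of length n
    n_amy     : number of amygdala-layer neurons N_AMY
    """
    n = len(x_t)
    dN = n // n_amy                  # Eq. (24): ΔN_AMY = n / N_AMY
    diff = x_t1 - x_t                # x_j(t+1) - x_j(t)
    # Eq. (23): average the change over the i-th block of ΔN_AMY outputs
    return diff[: n_amy * dN].reshape(n_amy, dN).mean(axis=1)
```

For example, with $n = 6$ CNN outputs and $N_{AMY} = 3$, each $S_i$ averages the change of two consecutive outputs.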
$$
R =
\begin{cases}
1.0 & (F > \theta_n) \\
0.0 & (\text{else})
\end{cases}
\qquad (25)
$$
$$
F = \frac{1}{N_{AMY}} \sum_{i=0}^{N_{AMY}-1} g\bigl( S_i - \theta_R \bigr). \qquad (26)
$$
$$
\text{CNN state} =
\begin{cases}
\text{chaotic} & (E > \theta_{AMY}) \\
\text{non-chaotic} & (\text{else})
\end{cases}
\qquad (27)
$$
where $S_i$ in Eq. (23) corresponds to the input of the amygdala (Eq. (19) and Eq. (20)), $x_j$ denotes the output of a chaotic neuron in the CNN layers, $N_{AMY}$ is the number of neurons in the amygdala layer of the amygdala model, $n$ is the number of CNN layers, and $R$ is the reward given by Eq. (25), where $F$ expresses the firing rate of the neurons in the amygdala layer, $\theta_R$ is a threshold on the output of the amygdala layer, $\theta_n$ is the threshold of the reward function Eq. (25), and $g(\cdot)$ is a sigmoid function. The output of the amygdala, $E$, controls the state of the MCNN with the threshold $\theta_{AMY}$, as described in Eq. (27).
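The reward and state-switching rules of Eqs. (25)-(27) can be sketched directly; the threshold values passed in below are illustrative placeholders, not the paper's settings:

```python
import numpy as np

def sigmoid(u):
    # g(.): standard logistic sigmoid
    return 1.0 / (1.0 + np.exp(-u))

def fire_rate(S, theta_R):
    # Eq. (26): F = (1/N_AMY) * sum_i g(S_i - theta_R)
    return float(np.mean(sigmoid(S - theta_R)))

def reward(F, theta_n):
    # Eq. (25): binary reward from the amygdala firing rate
    return 1.0 if F > theta_n else 0.0

def cnn_state(E, theta_amy):
    # Eq. (27): the amygdala output E switches the MCNN state
    return "chaotic" if E > theta_amy else "non-chaotic"
```

In a full control loop, the inputs $S_i$ from Eq. (23) would feed `fire_rate`, the resulting reward would drive the weight updates of Eqs. (21)-(22), and the amygdala output $E$ would set the CNN state.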
3 Simulations
To confirm the effectiveness of the proposed model of the limbic system, we performed two kinds of simulations on a personal computer with a Pentium 4 CPU. The first