Attraction for Stochastic Cellular Neural Networks
Li Wan 1 and Qinghua Zhou 2
1 Department of Mathematics and Physics, Wuhan Textile University, Wuhan 430073, China
2 Zhaoqing University, Zhaoqing 526061, China
Abstract. The aim of this paper is to establish new results and sufficient criteria for a weak attractor of stochastic cellular neural networks with delays. By using the Lyapunov method and a LaSalle-type theorem, sufficient conditions ensuring the existence of a weak attractor for stochastic cellular neural networks are established. Almost sure asymptotic stability follows as a special case of our results. Our criteria can be verified easily with the MATLAB LMI Toolbox. An example is given to demonstrate the results.
Keywords: Stochastic cellular neural networks, Weak attractor, Delays.
1 Introduction
Cellular neural networks have been studied extensively in past years and have found many applications in solving problems across various scientific disciplines. Such applications depend heavily on the networks' dynamical behavior. In recent years, considerable attention has been paid to the dynamics of stochastic neural networks, since stochastic disturbances are largely inevitable owing to thermal noise in electronic implementations. Some results on stochastic neural networks with delays have been reported in [1-9] and the references therein. However, that literature considers only the stability of stochastic neural networks with delays. In fact, besides stability, dynamical behaviors of interest include uniform boundedness, attractors, bifurcation, and chaos. To the best of our knowledge, there are so far few results on attractors for stochastic cellular neural networks with delays.
Motivated by the above discussions, the objective of this paper is to establish new
results and sufficient criteria on the weak attractor for the following stochastic cellular
neural networks with delays:
dx(t) = [−Cx(t) + Af(x(t)) + Bf(y(t))]dt + [Px(t) + Qy(t)]dw(t)    (1)
with the initial condition x(t) = φ(t), −τ ≤ t ≤ 0, where x(t) = (x_1(t), …, x_n(t))^T ∈ R^n is the state vector; C = diag(c_1, …, c_n) > 0 represents the rate with which the ith unit will reset its potential to the resting state in isolation when being disconnected from the network and the external stochastic perturbation; y(t) = (x_1(t − τ_1), …, x_n(t − τ_n))^T is the delayed state vector; f(x(t)) = [f_1(x_1(t)), …, f_n(x_n(t))]^T represents the neuron activation function.
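To make the model concrete, the following is a minimal numerical sketch of system (1) using the Euler–Maruyama scheme. All parameter matrices, the common delay τ (the paper allows per-neuron delays τ_i), the tanh activation, and the constant initial function φ are illustrative assumptions, not taken from the paper's example.

```python
import numpy as np

# Euler-Maruyama simulation of the delayed stochastic cellular
# neural network (1):
#   dx(t) = [-C x(t) + A f(x(t)) + B f(y(t))] dt + [P x(t) + Q y(t)] dw(t),
# with y(t) = x(t - tau); a single common delay tau is assumed here.

rng = np.random.default_rng(0)
n = 2                          # number of neurons (illustrative)
dt, tau, T = 0.001, 0.1, 1.0
d = int(tau / dt)              # delay expressed in time steps
steps = int(T / dt)

# Illustrative parameter matrices (hypothetical, chosen so that the
# self-feedback C dominates the couplings and noise intensities).
C = np.diag([3.0, 3.0])
A = np.array([[0.2, -0.1], [0.1, 0.2]])
B = np.array([[0.1, 0.0], [0.0, 0.1]])
P = 0.1 * np.eye(n)
Q = 0.1 * np.eye(n)

f = np.tanh                    # a typical Lipschitz activation function

# History segment on [-tau, 0]: constant initial function phi(t) = 0.5.
x = np.zeros((steps + d + 1, n))
x[: d + 1] = 0.5

for k in range(d, d + steps):
    xt = x[k]
    yt = x[k - d]              # delayed state y(t) = x(t - tau)
    drift = -C @ xt + A @ f(xt) + B @ f(yt)
    diffusion = P @ xt + Q @ yt
    dw = rng.normal(0.0, np.sqrt(dt))     # scalar Brownian increment
    x[k + 1] = xt + drift * dt + diffusion * dw

print(np.abs(x[-1]).max())
```

With these diagonally dominant parameters the trajectory contracts toward the origin, which is the qualitative behavior the paper's attractor criteria are designed to guarantee.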
 