As a consequence of (7.167) and (7.168) we see that

$$R(t)\,D(t) = \frac{1}{t}\,\frac{1}{t^{|2-\mu|}}. \qquad (7.169)$$
The dynamical LRT yields the following theoretical prediction for the difference response [8]:

$$D(t) = \frac{\epsilon\,\cos(\pi\mu/2 + \omega t)}{(\mu - 1)\,(\omega t)^{2-\mu}}, \qquad (7.170)$$
where ε < 1 is the perturbation strength. This result corresponds to the survival probability of (7.148) and to the strict assumption that ξ_s(t) is dichotomous. We have assessed that the adoption of different forms of survival probability, either Lévy or Mittag-Leffler, and the use of a not strictly dichotomous ξ_s(t), will generate different phases and different amplitudes, while leaving unchanged the structure of (7.140). The method of experimental observation of liquid crystals that we adopt [62] does not afford information accurate enough to quantitatively address this issue. For this reason we consider both φ and C as fitting parameters.
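As a rough numerical illustration of this fitting procedure, the sketch below evaluates a difference response with the structure predicted by the dynamical LRT: a harmonic oscillation modulated by a slowly decaying power-law envelope, with the amplitude C and phase φ treated as free parameters. The function name, the values of μ and ω, and the particular choice of C and φ matching (7.170) are illustrative assumptions, not values taken from the experiment discussed here.

```python
import numpy as np

def difference_response(t, C, phi, mu=1.5, omega=2 * np.pi * 0.1):
    """Difference response with the structure predicted by the dynamical LRT:
    a harmonic oscillation under a slowly decaying power-law envelope.
    C (amplitude) and phi (phase) are the fitting parameters mentioned in the
    text; the values of mu and omega used here are purely illustrative."""
    return C * np.cos(omega * t + phi) / (omega * t) ** (2.0 - mu)

# Illustrative parameter choice corresponding to the asymptotic prediction (7.170).
eps, mu = 0.1, 1.5
t = np.linspace(10.0, 1000.0, 5000)          # asymptotic regime, omega*t >> 1
D_pred = difference_response(t, C=eps / (mu - 1.0), phi=np.pi * mu / 2, mu=mu)

# The envelope decays as t**(mu - 2), i.e. much more slowly than an exponential,
# so the oscillation persists over the whole observation window.
print(D_pred[:3], D_pred[-3:])
```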
7.4 The statistical habituation model (SHM)
We provide an intuitive interpretation of a statistical approach to habituation by incorporating neuronal statistics into the dynamical response. This generalization is guided by the strategy Drew and Abbott [27] used to describe the closely related phenomenon of adaptation. They contend that, in modeling activity-dependent adaptation of neuronal response, it is impractical to describe in detail the full range of time constants suggested by experiment. Each time constant introduces a new and distinct exponential, and consequently multiple exponential processes are required to model adaptation. Taken in aggregate, however, these multiple exponential processes were found to be well described by a power law, and that description has been used successfully to describe adaptation in neural networks. A list of other biological phenomena that are well modeled by inverse power-law dependences on time, from single channels up to the level of human psychophysics, is given by Drew and Abbott [27], as well as by West et al. [68]. Herein we do not make this phenomenological replacement, but show analytically how an average over a distribution of rates (time constants) gives rise to inverse power laws in neuronal networks.
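A minimal numerical check of this claim, under the illustrative assumption that the channel rates are exponentially distributed with unit mean (a choice made only for this sketch, not taken from the text): averaging purely exponential single-channel relaxations over the rate distribution reproduces the exact inverse power law 1/(1 + t) rather than any single exponential.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rates (inverse time constants) of the individual channels, drawn here from an
# exponential distribution with unit mean -- an illustrative choice.
rates = rng.exponential(scale=1.0, size=100_000)

t = np.logspace(-1, 3, 40)

# Average the purely exponential single-channel relaxations over the rates.
avg_relaxation = np.exp(-np.outer(t, rates)).mean(axis=1)

# The same average done analytically: integral of e^{-r} e^{-r t} dr over r > 0
# equals 1/(1 + t), an inverse power law in t, even though every individual
# channel relaxes exponentially.
exact = 1.0 / (1.0 + t)

print(np.max(np.abs(avg_relaxation - exact)))   # small Monte Carlo discrepancy
```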
We take cognizance of the fact that multiple time scales contribute to the phenomenon of habituation. However, rather than phenomenologically replacing the multiple exponentials by an inverse power law to represent the full range of network dynamics, we assume that the dynamics consist of multiple interacting channels, each with a statistically different time constant. The output is then determined by the aggregation of the outputs of these multiple channels so as to provide an effective synaptic strength

$$\text{output} = w_{\text{eff}}(t) \times \text{input}. \qquad (7.171)$$
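A minimal sketch of (7.171) under the picture just described, assuming exponentially relaxing channels with a broad (here lognormal) spread of time constants; both choices, and the aggregation by simple averaging, are illustrative assumptions rather than the specific model of the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Time constants of the interacting channels; a broad lognormal spread stands in
# for "statistically different time constants" in this sketch.
tau = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

def w_eff(t):
    """Effective synaptic strength obtained by aggregating (here, averaging)
    the single-channel relaxations exp(-t / tau_k)."""
    t = np.atleast_1d(t).astype(float)
    return np.exp(-t[:, None] / tau[None, :]).mean(axis=1)

t = np.array([0.1, 1.0, 10.0, 100.0])
stimulus = 1.0                          # constant input, for illustration
output = w_eff(t) * stimulus            # output = w_eff(t) x input, Eq. (7.171)
print(output)
```

With time constants spread over several orders of magnitude, the aggregate w_eff(t) decays far more slowly than any individual channel, which is the behavior the power-law replacement is meant to capture.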