that it has reached sort of an “adult” age where its structure is essentially fixed. Note
that this does not exclude adaptation processes, e.g., if the transition probabilities
depend explicitly on time. Mathematically, the limit $m \to -\infty$ corresponds to studying the asymptotics of the Markov chain, and related questions are: Is there a limit probability $P\left[\omega_{n-D}^{n}\right]$? Does it depend on the initial condition $P\left[\omega_{m}^{m+D-1}\right]$?
Let us first consider the easiest case, where transition probabilities are invariant under time translation. This means that for each possible spiking pattern $\alpha \in \mathcal{A}$, for all possible "memory" blocks $\alpha_{-D}^{-1} \in \mathcal{A}^{D}$, and for all $n$, $P\left[\,\omega(n)=\alpha \,\middle|\, \omega_{n-D}^{n-1}=\alpha_{-D}^{-1}\,\right] = P\left[\,\omega(0)=\alpha \,\middle|\, \omega_{-D}^{-1}=\alpha_{-D}^{-1}\,\right]$. We call this property stationarity, referring rather to the physics literature than to the Markov chains literature (where this property is called homogeneity). If, additionally, all transition probabilities are strictly positive (positivity), then there is a unique probability $\mu$, called the asymptotic probability of the chain, such that, whatever the initial choice of a probability $P\left[\omega_{m}^{m+D-1}\right]$ in (8.4), the probability of a block $\omega_{n-D}^{n}$ converges to $\mu\left[\omega_{n-D}^{n}\right]$ as $m$ tends to $-\infty$. One says that the chain is ergodic (note that positivity of all transition probabilities is a sufficient but not necessary condition for ergodicity [64]).
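To make this concrete, here is a minimal Python sketch, not from the chapter: a hypothetical single neuron with memory depth $D=2$ and arbitrary, strictly positive transition probabilities. Iterating the evolution from several random initial block distributions yields the same limit $\mu$ each time.

```python
import numpy as np

# Toy example (hypothetical, for illustration only): one neuron, memory depth D = 2.
# The state of the induced Markov chain is the block (omega(t-1), omega(t)) in {0,1}^2.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Strictly positive conditional spiking probabilities P[omega(n)=1 | memory block];
# the numbers are made up.
p_spike = {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.7}

# Transition matrix on blocks: (w1, w2) -> (w2, a) with probability P[a | (w1, w2)].
M = np.zeros((4, 4))
for i, (w1, w2) in enumerate(states):
    for a in (0, 1):
        M[i, states.index((w2, a))] = p_spike[(w1, w2)] if a == 1 else 1 - p_spike[(w1, w2)]

# Ergodicity: whatever the initial block distribution, the iterated distribution
# converges to the same asymptotic probability mu.
rng = np.random.default_rng(0)
for _ in range(3):
    P0 = rng.random(4)
    P0 /= P0.sum()                       # arbitrary initial P[omega_m^{m+D-1}]
    print(np.round(P0 @ np.linalg.matrix_power(M, 200), 6))  # same vector each time
```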
In this sense, dynamics somewhat "selects" the probability $\mu$, since, whatever the initial condition $P\left[\omega_{m}^{m+D-1}\right]$, it provides the statistics of spikes observed after a sufficiently long time. Additionally, $\mu$ has the following property: for any times $n_1, n_2$ with $n_2 - n_1 \geq D$,
$$\mu\left[\omega_{n_1}^{n_2}\right] \;=\; \prod_{l=n_1+D}^{n_2} P\left[\,\omega(l) \,\middle|\, \omega_{l-D}^{l-1}\,\right]\, \mu\left[\omega_{n_1}^{n_1+D-1}\right]. \tag{8.5}$$
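As a quick numerical sanity check of Eq. (8.5), one can simulate a long spike train of the same made-up toy chain as above (not the chapter's model) and compare the empirical probability of a block with the product of transition probabilities times the empirical probability of its first $D$-block:

```python
import numpy as np

# Hypothetical toy chain: one neuron, memory depth D = 2, arbitrary probabilities.
p_spike = {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.7}

def trans(block, a):
    """Transition probability P[omega(l) = a | omega_{l-D}^{l-1} = block]."""
    return p_spike[block] if a == 1 else 1 - p_spike[block]

# Simulate a long train and drop a transient, so empirical statistics follow mu.
rng = np.random.default_rng(1)
omega = [0, 0]
for _ in range(300_000):
    omega.append(int(rng.random() < p_spike[tuple(omega[-2:])]))
omega = omega[1_000:]

def empirical(block):
    """Empirical probability of a finite block in the simulated train."""
    n = len(omega) - len(block)
    return sum(tuple(omega[t:t + len(block)]) == block for t in range(n)) / n

# Eq. (8.5) with n1 = 0, n2 = 3: mu[w0 w1 w2 w3] = P[w3|w1 w2] P[w2|w0 w1] mu[w0 w1].
w = (0, 1, 1, 0)
rhs = trans((w[1], w[2]), w[3]) * trans((w[0], w[1]), w[2]) * empirical(w[:2])
print(empirical(w), rhs)   # the two numbers agree up to sampling error
```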
Let us return to the problem of choosing the initial probability in Eq. (8.4). If one wants to determine the evolution of the Markov chain after an initial observation time $n_1$, one has to fix the initial probability $P\left[\omega_{n_1}^{n_1+D-1}\right]$ and to use (8.4) (where $m$ is replaced by $n_1$); there is an indeterminacy in the choice of $P\left[\omega_{n_1}^{n_1+D-1}\right]$. This indeterminacy is removed, though, if the system has started to exist in the infinite past. Then, $P\left[\omega_{n_1}^{n_1+D-1}\right]$ has to be replaced by $\mu\left[\omega_{n_1}^{n_1+D-1}\right]$ and Eq. (8.4) becomes:
$$\mu\left[\omega_{n_2-D}^{n_2}\right] \;=\; \sum_{\omega_{n_1}^{n_2-D-1}} \;\prod_{l=n_1+D}^{n_2} P\left[\,\omega(l)\,\middle|\,\omega_{l-D}^{l-1}\,\right]\, \mu\left[\omega_{n_1}^{n_1+D-1}\right], \tag{8.6}$$

where the sum runs over all blocks $\omega_{n_1}^{n_2-D-1}$ compatible with the prescribed block $\omega_{n_2-D}^{n_2}$.
In this way, taking the limit $m \to -\infty$ for an ergodic Markov chain resolves the indeterminacy in the initial condition.
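Continuing the same purely illustrative sketch (hypothetical chain, arbitrary numbers), one can check numerically what Eq. (8.6) expresses: if the chain is started from $\mu$ in the infinite past, then $\mu$ is what one observes at any later time; equivalently, $\mu$ is a fixed point of the evolution.

```python
import numpy as np

# Hypothetical toy chain again: one neuron, D = 2, arbitrary positive probabilities.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
p_spike = {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.7}

M = np.zeros((4, 4))
for i, (w1, w2) in enumerate(states):
    for a in (0, 1):
        M[i, states.index((w2, a))] = p_spike[(w1, w2)] if a == 1 else 1 - p_spike[(w1, w2)]

# mu is the left eigenvector of M associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(M.T)
mu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
mu /= mu.sum()

# Starting from mu at time n1 ("infinite past"), the block distribution at any
# later time n2 is mu again: the statistics selected by dynamics are invariant.
print(np.allclose(mu @ M, mu))                              # True
print(np.allclose(mu @ np.linalg.matrix_power(M, 50), mu))  # True
```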
Positivity and stationarity assumptions may not hold. If positivity is violated, then several situations can arise: several asymptotic probability distributions can exist, depending on the choice of the initial probability $P\left[\omega_{m}^{m+D-1}\right]$; it can also be that no asymptotic probability exists at all. If stationarity does not hold, as is the case, e.g., for a neural network with a time-dependent stimulus, then one can nevertheless define a probability $\mu$ selected by dynamics. In short, this is a probability $\mu$ on the set