in the interval [0, 1] there are uncountably many Markov chains with memory depth
1 matching the constraint. One could likewise consider memory depths D = 2, 3,
and so on.
Since transition probabilities reflect assumptions on the underlying (causal)
mechanisms taking place in the observed neural network, the choice of the statistical
model defined by those transition probabilities is not anecdotal. In the example
above, which can easily be generalized, one model considers that spikes are emitted
like a coin tossing, without memory, while other models involve a causal structure
with a memory of the past. Even worse, there are infinitely many choices for
μ_ap since (1) the memory depth can be arbitrary; (2) for a given memory depth
there are (infinitely) many Markov chains whose Gibbs distribution matches the
constraints (8.13). Is there a way to select, in fine, only one model from the
constraints (8.13) by adding some additional requirement? The answer is "yes".
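Before turning to that requirement, the non-uniqueness can be made concrete with a small numerical sketch (not from the text; the target firing rate r = 0.3 and the parameterization b = P(spike at t | spike at t−1) are illustrative assumptions). Every admissible value of b defines a different memory-depth-1 Markov chain whose stationary firing rate matches the same constraint; the choice b = r recovers the memoryless coin-tossing model.

```python
# Illustrative sketch: many memory-1 Markov chains share one firing-rate constraint.
import numpy as np

r = 0.3  # hypothetical target firing rate (the constraint)

for b in [0.1, 0.3, 0.5, 0.7, 0.9]:      # b = P(spike at t | spike at t-1)
    a = r * (1.0 - b) / (1.0 - r)        # a = P(spike at t | no spike at t-1),
                                         # chosen so the stationary rate equals r
    P = np.array([[1.0 - a, a],
                  [1.0 - b, b]])         # transition matrix, rows = past state
    # stationary distribution = eigenvector of P^T for eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()
    print(f"b = {b:.1f}, a = {a:.3f}, stationary firing rate = {pi[1]:.3f}")
```

All five chains report the same firing rate, yet they assign different probabilities to spike blocks of length two or more.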
8.3.2.6 Entropy

The entropy rate or Kolmogorov-Sinai entropy of a stationary probability
distribution μ is:
\[
h[\mu] = \lim_{n \to \infty} -\frac{1}{n} \sum_{\omega_1^{n}} \mu\left[\omega_1^{n}\right] \log \mu\left[\omega_1^{n}\right],
\tag{8.14}
\]
where the sum holds over all possible blocks $\omega_1^{n}$. This definition holds for systems
with finite or infinite memory. In the case of a Markov chain with memory depth
$D > 0$, we have [12]
\[
h[\mu] = -\sum_{\omega_1^{D+1}} \mu\left[\omega_1^{D}\right] P\left[\omega(D+1) \mid \omega_1^{D}\right] \log P\left[\omega(D+1) \mid \omega_1^{D}\right],
\tag{8.15}
\]
Note that, from time-translation invariance, the block $\omega_1^{D}$ can be replaced by
$\omega_n^{D+n-1}$, for any integer $n$.
When $D = 0$, the entropy reduces to the classical definition:
\[
h[\mu] = -\sum_{\omega(0)} \mu\left[\omega(0)\right] \log \mu\left[\omega(0)\right].
\tag{8.16}
\]
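As a concrete illustration of (8.15) and (8.16), here is a minimal numerical sketch, under assumptions not in the text: a single binary neuron, memory depth D = 1, and illustrative transition matrices; the helper name markov_entropy_rate is ours.

```python
# Entropy rate of a memory-1 Markov chain, following (8.15).
import numpy as np

def markov_entropy_rate(P):
    """Entropy rate (8.15) of a memory-1 chain with transition matrix P
    (rows = state at time t-1, columns = state at time t)."""
    evals, evecs = np.linalg.eig(P.T)
    mu = np.real(evecs[:, np.argmax(np.real(evals))])
    mu /= mu.sum()                              # stationary distribution mu
    logP = np.where(P > 0, np.log(P), 0.0)      # convention 0 log 0 = 0
    return -np.sum(mu[:, None] * P * logP)      # -sum mu[w] P[w'|w] log P[w'|w]

# Memoryless case (D = 0): both rows are equal and (8.15) reduces to (8.16).
p = 0.3
P_iid = np.array([[1 - p, p],
                  [1 - p, p]])
print(markov_entropy_rate(P_iid))                   # ~0.611
print(-(p * np.log(p) + (1 - p) * np.log(1 - p)))   # (8.16), same value

# A genuinely memory-1 chain with the same stationary firing rate 0.3.
P_mem = np.array([[0.9,    0.1],
                  [7 / 30, 23 / 30]])
print(markov_entropy_rate(P_mem))                   # ~0.391, smaller entropy rate
```

Both chains match the firing-rate constraint, but the memoryless one has the larger entropy rate; this is the kind of additional requirement that singles out one model among the infinitely many candidates, and it leads to the variational principle below.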
8.3.2.7 Gibbs Distributions in the Stationary Case
In the stationary case, Gibbs distributions obey the following variational principle
[10, 28, 57]:
\[
P(\phi) = \sup_{\nu \in \mathcal{M}_{inv}} \big( h[\nu] + \nu[\phi] \big) = h[\mu] + \mu[\phi],
\tag{8.17}
\]
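As a sanity check of (8.17) in the simplest setting, here is a minimal numerical sketch with assumptions of ours: a binary alphabet, a range-1 potential with illustrative values φ(0) = −1 and φ(1) = 0.5, and the supremum restricted to memoryless measures ν_p (which is enough for a range-1 potential, since the maximizing measure is then itself memoryless). The supremum of h[ν] + ν[φ] matches the pressure P(φ) = log(e^{φ(0)} + e^{φ(1)}) and is attained at the Gibbs measure.

```python
# Numerical check of the variational principle (8.17) for a range-1 potential.
import numpy as np

phi0, phi1 = -1.0, 0.5           # hypothetical potential values phi(0), phi(1)

def h(p):                        # entropy rate (8.16) of the memoryless measure nu_p
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def functional(p):               # h[nu_p] + nu_p[phi]
    return h(p) + (1 - p) * phi0 + p * phi1

ps = np.linspace(1e-6, 1 - 1e-6, 100001)
values = functional(ps)

pressure = np.log(np.exp(phi0) + np.exp(phi1))           # P(phi) for this potential
p_star = np.exp(phi1) / (np.exp(phi0) + np.exp(phi1))    # Gibbs measure mu
print(values.max(), pressure)        # the supremum matches the pressure
print(ps[values.argmax()], p_star)   # and is attained at the Gibbs measure
```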