8.3.2.5 Matching Experimental Averages
Assume that an experimentalist observes rasters, and assume that all those rasters are distributed according to a hidden probability distribution μ. Is it possible to determine, or at least to approach, μ from those rasters? One possibility relies on the maximal entropy principle described in the next sections. We assume for the moment that the statistics is stationary.
Fix K observables O_k, k = 1, ..., K, and compute their empirical averages π^(T)_ω[O_k]. The remarks of the previous sections hold: since all rasters are distributed according to μ, π^(T)_ω[O_k] is a random variable with mean μ[O_k] and Gaussian¹ fluctuations about its mean, of order 1/√T. If there are N > 1 rasters, the experimentalist can estimate the order of magnitude of those fluctuations and also analyze the raster-length dependence. In the end, he obtains an empirical average value for each observable, π^(T)_ω[O_k] = C_k, k = 1, ..., K. Now, to estimate the hidden probability μ by some approximated probability μ_ap, he has to assume, as a minimal requirement, that:

    π^(T)_ω[O_k] = C_k = μ_ap[O_k],   k = 1, ..., K,        (8.13)
i.e., the expected average of each observable, computed with respect to μ_ap, is equal to the average found in the experiment. This fixes a set of constraints used to approach μ. We call μ_ap a statistical model.
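As a quick numerical illustration of Eq. (8.13) (our own sketch, not from the chapter: the firing rate 0.5, the raster count, and all variable names are assumptions for the example), one can simulate N > 1 rasters drawn from a hidden distribution, take the firing rate as the observable O_k, and watch the empirical averages π^(T)_ω[O_k] fluctuate around μ[O_k] with a spread of order 1/√T:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.5        # hypothetical firing rate of the hidden distribution mu
n_rasters = 100   # number of observed rasters, N > 1

# Empirical average pi^(T)_omega[O_k] of the observable "neuron k spikes",
# computed for each raster; fluctuations around mu[O_k] shrink like 1/sqrt(T).
stds = {}
for T in (100, 1000, 10000):
    rasters = rng.random((n_rasters, T)) < rate   # hidden model: independent spikes
    pi_T = rasters.mean(axis=1)                   # one empirical average per raster
    stds[T] = pi_T.std()
    print(f"T={T:5d}  mean={pi_T.mean():.4f}  std={stds[T]:.4f}  "
          f"1/sqrt(T)={1 / np.sqrt(T):.4f}")
```

Comparing the printed standard deviations across raster lengths is exactly the "raster-length dependence" analysis mentioned above: each tenfold increase in T shrinks the fluctuations by roughly √10.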
Unfortunately, this set of conditions does not fix a unique solution but infinitely many! As an example, if we have only one neuron whose firing rate is 1/2, then a straightforward choice for μ_ap is the probability where successive spikes are independent (P[ω_k(n) ω_k(n−1)] = P[ω_k(n)] P[ω_k(n−1)]) and where the probability of a spike is 1/2. However, one can also take a one-step memory model where the transition probabilities obey

    P[ω_k(n) = 0 | ω_k(n−1) = 0] = P[ω_k(n) = 1 | ω_k(n−1) = 1] = p,
    P[ω_k(n) = 0 | ω_k(n−1) = 1] = P[ω_k(n) = 1 | ω_k(n−1) = 0] = 1 − p,

with p ∈ [0, 1]. In this case, indeed, the invariant probability of the corresponding Markov chain is μ_ap[ω_k(n) = 0] = 1/2, since, from Eq. (8.5),

    μ_ap[ω_k(n) = 0] = Σ_{ω_k(n−1) = 0, 1} P[ω_k(n) = 0 | ω_k(n−1)] μ_ap[ω_k(n−1)] = p/2 + (1 − p)/2 = 1/2.

The same holds for μ_ap[ω_k(n) = 1]. In this case, we match the constraint too, but with a model where successive spikes are not independent. Now, since p takes values in [0, 1], there are infinitely many such models matching the same constraint.
¹ Fluctuations are not necessarily Gaussian if the system undergoes a second-order phase transition, where the topological pressure introduced in Sect. 8.3.1.5 is not twice differentiable.
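The one-step memory example can be checked numerically. The sketch below (our own, under the assumption of an arbitrary p = 0.7) verifies that (1/2, 1/2) is invariant for the symmetric transition matrix, so the firing-rate constraint is matched, while the joint probability of two successive spikes differs from the independent product 1/4:

```python
import numpy as np

p = 0.7   # any p in [0, 1]; p = 1/2 recovers the independent model

# One-step memory model: rows index the previous state omega_k(n-1),
# columns the next state omega_k(n); staying costs p, switching 1 - p.
P = np.array([[p, 1 - p],
              [1 - p, p]])

# Invariant probability of the Markov chain: by symmetry it is (1/2, 1/2),
# i.e. a left eigenvector of P with eigenvalue 1 -- the firing rate is matched.
mu_ap = np.array([0.5, 0.5])
assert np.allclose(mu_ap @ P, mu_ap)

# Yet successive spikes are correlated whenever p != 1/2:
joint_11 = mu_ap[1] * P[1, 1]          # P[omega(n)=1, omega(n-1)=1]
print(joint_11, mu_ap[1] ** 2)         # 0.35 vs 0.25 for p = 0.7
```

Varying p sweeps out a one-parameter family of statistical models, all satisfying the same constraint (8.13), which is the non-uniqueness the text points out.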