8.4.4.4 From Non-Markovian to Markovian Potentials
Since the dependence on the past decays exponentially fast, thanks to the exponential decay of the synaptic response, it is possible to provide Markovian approximations of the potential (8.41). Basically, one truncates the synaptic response after a characteristic time D and performs a series expansion of the function (8.44), using the fact that any power of a monomial is the same monomial (the spike variables \(\omega_i(n)\) take values in \(\{0,1\}\), so \(\omega_i(n)^p = \omega_i(n)\)). So, the series becomes a polynomial, which provides a Markovian potential of the form (8.29). Here, the coefficients \(\beta_{i_1,n_1,\dots,i_l,n_l}(n)\) depend explicitly on the synaptic weights (network structure) as well as on the stimulus. Now, for N neurons and a memory depth D, the truncated potential contains \(2^{ND}\) coefficients \(\beta_{i_1,n_1,\dots,i_l,n_l}(n)\), while the exact (\(D = \infty\)) potential depends only on a polynomial number of parameters. This shows that, in this model, a potential of the form (8.29) induces a strong, and somehow pathological, redundancy.
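To make the counting explicit, here is a schematic form of the truncated potential; the spike-block notation and the indexing of the monomials follow the conventions of (8.29), and this sketch only illustrates the combinatorics, not the exact expression:
\[
\phi^{(D)}\big(n,\omega_{-D}^{0}\big)
\;=\;
\sum_{l}\;\sum_{(i_1,n_1),\dots,(i_l,n_l)}
\beta_{i_1,n_1,\dots,i_l,n_l}(n)\,
\omega_{i_1}(n_1)\cdots\omega_{i_l}(n_l),
\qquad \omega_{i_j}(n_j)\in\{0,1\}.
\]
Since \(\omega^2 = \omega\), each monomial is a product of distinct spike events, so the monomials are in one-to-one correspondence with the subsets of the \(ND\) binary variables \(\omega_i(n)\), \(i = 1,\dots,N\), with \(n\) ranging over the D time steps of the memory window; this gives \(\sum_{l=0}^{ND}\binom{ND}{l} = 2^{ND}\) coefficients in total.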
Additionally, the truncated potential is far from the Ising, or more elaborate, potentials used in experimental spike train analysis. As we have seen, most of these models are memory-less and non-causal. Now, the best approximation of the potential (8.41) by a memory-less potential is ... Bernoulli. This is because of the specific form of \(\phi\): a term \(\omega_k(0)\) multiplied by a function of \(\omega_{-\infty}^{-1}\). To have a memory-less potential one has to replace this function by a constant, therefore giving a Bernoulli potential. So, the Ising model, as well as memory-less models, is rather poor at describing the statistics of model (8.34). But, then, how can we explain the success of the Ising model in analyzing retina data? We return to this point in the conclusion section.
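A minimal sketch of this argument, with \(f\) standing generically for the history-dependent factor described above (the exact form of \(\phi\) is given by (8.42)): writing the per-neuron potential schematically as
\[
\phi_k(\omega)\;=\;\omega_k(0)\,f\big(\omega_{-\infty}^{-1}\big)\;+\;\text{terms not involving }\omega_k(0),
\]
a memory-less approximation must replace \(f\) by a constant \(a_k\), leaving \(\phi_k(\omega)\approx a_k\,\omega_k(0)\) up to an additive constant. No pairwise term \(\omega_j(0)\,\omega_k(0)\) can appear, so the best memory-less approximation is a Bernoulli potential rather than an Ising one.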
8.4.4.5 Are Neurons Independent?
For this model we can answer the question of neuron independence. The potential (8.41) is a sum over neurons, similarly to (8.33), but it does not have the same form as (8.33). The difference is subtle but essential. While in (8.33) the potential of neuron k, \(\phi_k\), depends upon the past via the spike-history \(\omega_k\) of neuron k only, in (8.41) \(\phi_k\) depends upon the past via the spike-history \(\omega\) of the entire network. The factorization in (8.41) reflects only a conditional independence: if the past spike history of the network is fixed, then the only source of randomness is the noise, which is, by hypothesis, statistically independent across neurons. So, there is nothing deep in the factorization (8.41). On the contrary, a factorization like (8.33) would reflect a deeper property: neurons would somehow be able to produce responses which are well approximated by a function of their own history only, although each neuron receives inputs from many other neurons. Considering the form of the potential \(\phi\) given by (8.42), there is no chance to obtain the strong factorization property (8.33) unless neurons are disconnected. This property could, however, arise if the model obeys a mean-field theory as the number of neurons tends to infinity. This requires, in general, strong constraints on the synaptic weights (vanishing correlations), not necessarily realistic.
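Schematically, and with the history arguments written out only to stress the difference (the exact expressions are (8.33) and (8.41)), the two factorizations read:
\[
\text{(8.33):}\;\; \phi(\omega)=\sum_{k=1}^{N}\phi_k\big(\omega_{k,-\infty}^{0}\big),
\qquad
\text{(8.41):}\;\; \phi(\omega)=\sum_{k=1}^{N}\phi_k\big(\omega_k(0);\,\omega_{-\infty}^{-1}\big).
\]
In the first sum each term depends on the history of neuron k alone; in the second, every term depends on the common network history, so the factorization only expresses independence of the noise once that shared past is fixed.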