First, there is consistency, which states that the sequence of maximum likelihood estimates converges in probability (or, more strongly, almost surely) to the true value as the sample size increases. Second, the convergence in probability of the estimates means that they are asymptotically unbiased. That is, the expected value of the estimate θ̂ approaches θ as the sample size increases. For some models and some parameters, unbiasedness is a finite sample property. The third property is invariance. That is, if θ̂ is the maximum likelihood estimate of θ, then τ(θ̂) is the maximum likelihood estimate of τ(θ). Finally, the maximum likelihood estimates are asymptotically efficient in that, as the sample size increases, the variance of the maximum likelihood estimate achieves the Cramér-Rao lower bound. This lower bound defines the smallest variance that an unbiased or asymptotically unbiased estimate can achieve. Like unbiasedness, efficiency for some models and some parameters is achieved in a finite sample. Detailed discussions of these properties are given in [9, 31].
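The consistency and invariance properties can be seen in a minimal numerical sketch (my own illustration; the Poisson-count model, the true rate, and all variable names are assumptions, not from the text). For i.i.d. Poisson counts the maximum likelihood estimate of the rate θ is the sample mean; by invariance, τ(θ̂) = 1/θ̂ is then the maximum likelihood estimate of τ(θ) = 1/θ.

```python
import numpy as np

# Illustrative sketch: MLE consistency and invariance for i.i.d.
# Poisson counts with true rate theta = 4 (all values assumed).
rng = np.random.default_rng(0)
true_theta = 4.0

for n in (10, 1_000, 100_000):
    counts = rng.poisson(true_theta, size=n)
    theta_hat = counts.mean()      # MLE of theta: the sample mean
    tau_hat = 1.0 / theta_hat      # MLE of tau(theta) = 1/theta, by invariance
    print(n, theta_hat, tau_hat)   # theta_hat approaches 4.0 as n grows
```

As the sample size grows, θ̂ settles near the true value 4 and τ(θ̂) near 0.25, illustrating consistency and the functional invariance of the MLE.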
9.2.5
Model selection and model goodness-of-fit
In many data analyses it is necessary to compare a set of models for a given neural
spike train. For models fit by maximum likelihood, a well-known approach to model
selection is Akaike's Information Criterion (AIC) [31]. The criterion is defined as

AIC = −2 log L(θ̂ | N_{0:T}) + 2q,    (9.20)
where q is the dimension of the parameter vector θ. The AIC measures the trade-off between how well a given model fits the data and the number of model parameters needed to achieve this fit. The fit of the model is measured by the value of −2 times the maximized log likelihood, and the cost of the number of fitted parameters is measured by 2q. Under this formulation, i.e., considering the negative of the maximized log likelihood, the model that describes the data best in terms of this trade-off will have the smallest AIC.
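As a concrete illustration of Eq. (9.20), the sketch below (entirely my own construction; the simulated spike train and both candidate rate models are assumptions, not from the text) computes the AIC for a homogeneous Poisson model (q = 1) and a two-piece constant-rate model (q = 2), each fit by maximum likelihood to the same spike train. The Poisson process log likelihood used is Σ_i log λ(t_i) − ∫₀ᵀ λ(t) dt.

```python
import numpy as np

# Illustrative sketch: AIC comparison of two rate models for one
# simulated spike train whose true rate doubles at T/2 (all assumed).
rng = np.random.default_rng(1)
T = 100.0
rate1, rate2 = 5.0, 10.0

n1 = rng.poisson(rate1 * T / 2)
n2 = rng.poisson(rate2 * T / 2)
spikes = np.sort(np.concatenate([
    rng.uniform(0.0, T / 2, n1),
    rng.uniform(T / 2, T, n2),
]))

def aic(log_l, q):
    # Eq. (9.20): AIC = -2 log L + 2q.
    return -2.0 * log_l + 2.0 * q

# Model 1: homogeneous Poisson, one parameter (lambda_hat = n / T).
n = spikes.size
lam = n / T
log_l1 = n * np.log(lam) - lam * T

# Model 2: piecewise-constant rate, one parameter per half of the trial.
m1 = np.sum(spikes < T / 2)
m2 = n - m1
l1, l2 = m1 / (T / 2), m2 / (T / 2)
log_l2 = m1 * np.log(l1) + m2 * np.log(l2) - (l1 + l2) * (T / 2)

print(aic(log_l1, 1), aic(log_l2, 2))
```

Because the simulated rate truly changes at T/2, the two-parameter model gains far more in log likelihood than the 2q penalty costs, so it attains the smaller AIC.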
Evaluating model goodness-of-fit, i.e., measuring quantitatively the agreement be-
tween a proposed model and a spike train data series, is a more challenging problem
than for models of continuous-valued processes. Standard discrepancy measures applied in continuous data analyses, such as the average sum of squared deviations between recorded data values and values estimated from the model, cannot
be directly computed for point process data. Berman [4] and Ogata [28] developed
transformations that, under a given model, convert point processes like spike trains
into continuous measures in order to assess model goodness-of-fit. One of the trans-
formations is based on the time-rescaling theorem.
A form of the time-rescaling theorem is well known in elementary probability
theory. It states that any inhomogeneous Poisson process may be rescaled or trans-
formed into a homogeneous Poisson process with a unit rate [36]. The inverse trans-
formation is a standard method for simulating an inhomogeneous Poisson process
from a constant rate (homogeneous) Poisson process. Meyer [26] and Papangelou
[30] established the general time-rescaling theorem, which states that any point pro-
cess with an integrable rate function may be rescaled into a Poisson process with a unit rate.
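A sketch of the resulting goodness-of-fit check (my own illustration; the intensity function and all variable names are assumptions, not from the text): simulate an inhomogeneous Poisson spike train by thinning, rescale the spike times through the cumulative intensity Λ(t), and compare the rescaled interspike intervals with the Exponential(1) distribution via a Kolmogorov-Smirnov statistic.

```python
import numpy as np

# Illustrative time-rescaling check (rate function assumed, not from
# the text): under the true model, the rescaled interspike intervals
# should be i.i.d. Exponential(1).
rng = np.random.default_rng(2)
T = 200.0
rate_max = 6.0

def rate(t):
    # Example intensity: oscillates between 2 and 6 spikes/s.
    return 4.0 + 2.0 * np.sin(2.0 * np.pi * t / 50.0)

def Lambda(t):
    # Cumulative intensity: integral of rate(s) ds from 0 to t.
    return 4.0 * t + (50.0 / np.pi) * (1.0 - np.cos(2.0 * np.pi * t / 50.0))

# Simulate the spike train by thinning a homogeneous Poisson process.
cand = np.sort(rng.uniform(0.0, T, rng.poisson(rate_max * T)))
spikes = cand[rng.uniform(0.0, 1.0, cand.size) < rate(cand) / rate_max]

# Time-rescaling: intervals of the transformed times Lambda(t_k).
z = np.diff(Lambda(spikes))

# Kolmogorov-Smirnov statistic against the Exponential(1) CDF.
zs = np.sort(z)
k = np.arange(1, zs.size + 1)
F = 1.0 - np.exp(-zs)
D = np.max(np.maximum(k / zs.size - F, F - (k - 1) / zs.size))
print(spikes.size, z.mean(), D)  # D on the order of 1/sqrt(n) if the model fits
```

Because the rescaling here uses the same intensity that generated the spikes, the KS statistic stays small; a misspecified Λ(t) would inflate it, which is the basis of the goodness-of-fit test.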