While this is necessarily a part of the modeling process for us,
it seems to have no physical significance; the system does not
need to represent its organization, it just has it.
8.3.2. Complexity via Prediction
Forecast Complexity and Predictive Information. Motivated in part by concerns such as these, Grassberger (187) suggested a new and highly satisfactory approach to system complexity: complexity is the amount of information
required for optimal prediction. Let us first see why this idea is plausible, and
then see how it can be implemented in practice. (My argument does not follow
that of Grassberger particularly closely. Also, while for clarity I confine myself to time series, the argument generalizes to any kind of prediction (188).)
We have seen that there is a limit on the accuracy of any prediction of a
given system, set by the characteristics of the system itself (limited precision of
measurement, sensitive dependence on initial conditions, etc.). Suppose we had
a model that was maximally predictive, i.e., its predictions were at this limit of
accuracy. Prediction, as I said, is always a matter of mapping inputs to outputs;
here the inputs are the previous values of the time series. However, not all aspects of the entire past are relevant. In the extreme case of independent, identically distributed (IID) values, no aspects of the past are relevant. In the case of
periodic sequences with period p, one only needs to know which of the p phases the sequence is in. If we ask how much information about the past is relevant in these two cases, the answers are clearly 0 and log p, respectively. If one is dealing with a Markov chain, only the present state is relevant, so the amount of information needed for optimal prediction is just equal to the amount of information needed to specify the current state. One thus has the nice feeling that both highly random (IID) and highly ordered (low-period deterministic) sequences are of low complexity, and more interesting cases can get high scores.
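As a rough numerical illustration of these two extreme cases (a sketch of my own, not from the text), one can compute the entropy of the empirical distribution over effective states: the phase of a period-p sequence carries log p bits, while an IID process needs only a single trivial state.

```python
import math
from collections import Counter

def state_entropy(states):
    """Shannon entropy H[S], in bits, of the empirical state distribution."""
    counts = Counter(states)
    n = len(states)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Period-3 sequence: the relevant effective state is the phase (3 values).
period = 3
seq = [i % period for i in range(3000)]
phases = [t % period for t in range(len(seq))]
print(state_entropy(phases))  # log2(3) ≈ 1.585 bits

# IID values: no aspect of the past is relevant, so one trivial state suffices.
trivial = [0] * 3000
print(state_entropy(trivial))  # 0 bits
```

Here the "states" are chosen by hand; the point is only that H[S] reproduces the 0 and log p answers quoted above.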
More formally, any predictor f will translate the past of the sequence x⁻ into an effective state, s = f(x⁻), and then make its prediction on the basis of s. (This is true whether f is formally a state-space model or not.) The amount of information required to specify the state is H[S]. We can take this to be the complexity of f. Now, if we confine our attention to the set M of maximally predictive
models, we can define what Grassberger called the "true measure complexity" or
"forecast complexity" of the process as the minimal amount of information
needed for optimal prediction:
C = min_{f ∈ M} H[f(X⁻)] .   [59]
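To make the minimization in [59] concrete, here is a small Python sketch using an illustrative process of my own devising (not Grassberger's): a four-symbol chain whose next-step distribution depends only on the parity of the current symbol. Keeping the whole symbol and keeping only its parity are both maximally predictive effective-state maps, but the parity map has the smaller H[S], and it is this minimum that [59] identifies as the forecast complexity.

```python
import math
import random
from collections import Counter

def entropy_bits(values):
    """Shannon entropy H[S], in bits, of the empirical state distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
# Illustrative chain: from an even symbol the next is uniform on {1, 3};
# from an odd symbol it is uniform on {0, 2}. Prediction of the next step
# therefore depends only on the parity of the current symbol.
seq = [0]
for _ in range(20000):
    seq.append(random.choice([1, 3] if seq[-1] % 2 == 0 else [0, 2]))

# Two maximally predictive effective-state maps f(x⁻):
s_full = seq[:-1]                     # keep the whole current symbol (4 states)
s_parity = [x % 2 for x in seq[:-1]]  # keep only its parity (2 states)

# Both predict equally well; forecast complexity is the smaller H[S].
C = min(entropy_bits(s_full), entropy_bits(s_parity))
print(entropy_bits(s_full))    # ≈ 2 bits
print(entropy_bits(s_parity))  # ≈ 1 bit: one bit of parity is all that matters
```

The full-symbol map carries an extra, predictively irrelevant bit; the minimization in [59] discards such redundancy.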