understanding of phenomena, before the multitude of ways inverse power-law behavior
can be mathematically modeled is appreciated.
1.4 Recap
In this first chapter we argued that normal statistics do not describe complex webs. Phenomena described by normal statistics can be discussed in terms of averages and are, in a relative sense, simple rather than complex. An example of a simple process is manufacturing: the distribution of errors in the manufacture of products must be kept within narrow limits or the manufacturer will soon be out of business. Thus the industrial revolution was poised to embrace the world according to Gauss, and the mechanistic society of the last two centuries flourished. But as the connectivity of the various webs within society grew more complex, the normal distribution receded further and further into the background until it disappeared from the data entirely, if not from our attempted understanding.
The world of Gauss is attractive, in part, because it leads to consensus. A process that has typical outcomes, real or imagined, enables people to focus attention on one or a few quantities and, through discussion, decide what is important about the process. Physicians do this in deciding whether or not we are healthy. They measure our heart rate and blood pressure, and perhaps observe us walking, all to determine whether these quantities fall within the "normal" range. The natural variability of these physiologic quantities has in the past caused only confusion, and so it was, by and large, ignored. A similar situation occurs when we try to get a salary increase and the boss brings out the charts to show that we already earn the average salary for our position, and maybe even a little more.
The normal distribution is based on one view of knowledge, namely the notion that there is a best outcome of an experiment, an outcome that is predictable and given by the average value. Natural uncertainty is explained by a maximum-entropy argument: the average provides the best representation of the data, and the variation is produced by the environment having the maximum entropy consistent with that average. But this interesting justification for normalcy turns out to be irrelevant, because normal statistics do not appear in any interesting data (complex webs), not even in the distribution of grades in school. We did look at the distributions from some typical complex phenomena: the number of scientific papers published, the frequency of words used in languages, the populations of cities and the metabolic rates of mammals and birds. The distributions for these complex webs were all inverse power laws, and a maximum-entropy argument showed that the relevant property is the scale-free nature of the underlying dynamic variable.
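The maximum-entropy reasoning above can be summarized in a short calculation; the following sketch uses standard Lagrange-multiplier arguments, with generic symbols μ, σ and β not taken from the text. One maximizes the entropy functional

```latex
S[p] = -\int p(x)\,\ln p(x)\,dx .
```

Fixing the mean \(\langle x\rangle = \mu\) and the variance \(\langle (x-\mu)^2\rangle = \sigma^2\) yields the normal law,

```latex
p(x) \propto e^{-(x-\mu)^2/(2\sigma^2)} ,
```

whereas fixing the average logarithm \(\langle \ln x\rangle\) instead yields the hyperbolic, inverse power-law form,

```latex
p(x) \propto e^{-\beta \ln x} = x^{-\beta} .
```

The latter constraint makes the scale-free property explicit: \(p(\lambda x) = \lambda^{-\beta}\,p(x)\), so rescaling the dynamic variable changes only the amplitude of the distribution, not its shape.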
Replacing the normal distribution with the hyperbolic distribution implies that it is the extremes of the data, rather than the central tendency, that dominate complex webs. Consequently, our focus shifted from the average value and the standard deviation to the variability of the process being investigated, a variability for which the standard deviation diverges. Of particular significance was the difference