rate and blood pressure and perhaps observe us walking, all to determine whether these
measurements fall within the "normal" range. The natural variability of these physio-
logic quantities has in the past only caused confusion, and so it is ignored. A
similar situation occurs when we try to get a salary increase and the boss brings out the
charts to show that we already earn the average salary for our position, and maybe even
a little more. Stephen Jay Gould had a gift for language, and in his book Full House [4]
he put it this way:
... our culture encodes a strong bias either to neglect or ignore variation. We tend to focus instead
on measures of central tendency, and as a result we make some terrible mistakes, often with
considerable practical importance.
The normal distribution is based on one view of knowledge, a view embracing the
notion that there is a best outcome from an experiment, namely an outcome that is pre-
dictable and given by the average value. The natural uncertainty was explained by means
of the maximum-entropy argument, where the average provides the best representa-
tion of the data and the variation is produced by the environment having the maximum
entropy consistent with the average. But this interesting justification for normalcy turned
out to be irrelevant, because normal statistics did not appear in any interesting data,
not even in the distribution of grades in school. We did look at the distributions of
some typical complex phenomena: the number of scientific papers
published, the frequency of words used in languages, the populations of cities and
the metabolic rates of mammals and birds. The distributions for these complex webs
were all described by an inverse power law, for which a maximum-entropy argument
showed that the relevant property is the scale-free nature of the dynamical variable.
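To recall the form of that argument (a standard maximum-entropy calculation, sketched here with generic symbols rather than the book's notation): maximizing the entropy
$$
S[p] = -\int p(x)\,\ln p(x)\,dx
$$
subject to a fixed mean and variance yields the normal density, whereas fixing the average of $\ln x$ on $x \ge x_{\min}$ yields the hyperbolic form
$$
p(x) = \frac{\alpha - 1}{x_{\min}}\left(\frac{x}{x_{\min}}\right)^{-\alpha}, \qquad \alpha > 1,
$$
which is scale-free in the sense that $p(\lambda x) = \lambda^{-\alpha}\,p(x)$: rescaling the variable changes only the overall amplitude, not the shape of the distribution.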
The center-stage replacement of the normal distribution with the hyperbolic distri-
bution implies that it is the extremes of the data that dominate complex webs, not
the central tendency. Consequently, our focus shifted from the average value and
the standard deviation to the variability of the process being investigated, a variability
such that the standard deviation diverges. Of particular significance was the difference
in the predictability of the properties of such networks, especially their failure char-
acteristics. No longer does the survival probability of an element or a network have
the relatively rapid decline of an exponential, but instead failure has a heavy tail that
extends far from the central region. Some examples were given to show the conse-
quences of this heavy tail: the durations of power outages have this property, as do the
sizes of forest fires and many other phenomena that threaten society. The why of this
observation comes later.
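To make the contrast explicit (a standard comparison, with the symbols $\tau$, $T$ and $\alpha$ introduced here for illustration): an exponential survival probability
$$
P(t) = e^{-t/\tau}
$$
decays with the characteristic time $\tau$, so failures much later than $\tau$ are vanishingly rare. A hyperbolic survival probability such as
$$
P(t) = \left(1 + \frac{t}{T}\right)^{1-\alpha}, \qquad \alpha > 1,
$$
has no such cutoff; its mean failure time is finite only for $\alpha > 2$ and its variance only for $\alpha > 3$, so in the range $2 < \alpha \le 3$ the average outage duration or fire size exists while the standard deviation diverges, exactly the situation described above.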
If Chapter 1 is the embarkation point from the world of Gauss to the world of Pareto,
then Chapter 2 is the introduction to the ubiquity of the inverse power-law distribution
in the physical, social and life sciences, together with its interpretation in terms of frac-
tals according to Mandelbrot [5]. The explanation of scaling in the observables describ-
ing these complex webs spanned the centuries, from the geophysics and physiology of da
Vinci to the economics of Montroll and Shlesinger [6] and the bronchial airways studied
by West et al. [8]. We also found that there are two different kinds of distributions that scale:
those that follow the law of Zipf, where the relative frequency is an inverse power law
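For reference, Zipf's law in its standard form (with the rank symbol $r$ and exponent $\beta$ chosen here for illustration, not taken from the text): if the items of a collection are ranked from most to least frequent, the relative frequency of the item of rank $r$ behaves as
$$
f(r) \propto \frac{1}{r^{\beta}}, \qquad r = 1, 2, 3, \ldots,
$$
with $\beta$ close to unity for word frequencies in natural languages.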