at a party; a person soon forgets the initial chill experienced upon entering a room; and
the evil smell of the newly opened cheese fades from consciousness with the passage
of time. Humans and other animals become unresponsive to repeated stimulation, and this enables them to attend to new and, arguably, more important sources of information. How information is transported in complex networks, and how this is related to the phenomenon of habituation, will be discussed later.
1.1.2 Maximum entropy
In paraphrasing Gauss' argument in an earlier section we introduced the idea of max-
imizing a function, that is, determining where the variation of a function with respect
to an independent variable vanishes. Recall that the derivative of a function vanishes at
an extremum. This trick has a long pedigree in the physical sciences, and is not without
precedent in the social and life sciences as well. We use the vanishing of the variation
here to demonstrate another of the many methods that have been devised over the years
to derive the normal distribution, to gain insight into its meaning, and to realize the kind
of phenomena that it can be used to explain. The variational idea was most ambitiously
applied to entropy by Jaynes [17] as a basis for a formal derivation of thermodynamics.
Here we are much less ambitious and just use Jaynes' methods to emphasize the dis-
tinction between phenomena that can be described by normal statistics and those that
cannot.
In the previous subsection we mentioned the idea of entropy as a measure of order: the more disordered the web, the greater the entropy. This idea can be used to maximize the entropy subject to experimental constraints, which is done by determining the least-biased probability density describing the network that is consistent with observations. This is an extension of Gauss' approach, in which the vanishing of the average value and the maximization were used to determine the normal distribution, with the parameters of the distribution then fixed by the normalization and the variance.
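This least-biased construction can be checked numerically. The following is a minimal sketch, not taken from the text: it discretizes q on a finite grid, maximizes the discrete entropy subject to the normalization and second-moment constraints using scipy, and compares the result with the normal density. The grid, the value of sigma, and the choice of optimizer are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative assumptions: observed second moment sigma^2 = 1,
# and q discretized on a symmetric grid covering +/- 5 sigma.
sigma = 1.0
q = np.linspace(-5 * sigma, 5 * sigma, 101)
dq = q[1] - q[0]

def neg_entropy(p):
    """Negative entropy, sum p ln p dq; minimizing it maximizes entropy."""
    p = np.clip(p, 1e-12, None)  # guard against log(0)
    return np.sum(p * np.log(p)) * dq

constraints = (
    # Normalization: integral of p(q) dq = 1
    {"type": "eq", "fun": lambda p: np.sum(p) * dq - 1.0},
    # Observed second moment: integral of q^2 p(q) dq = sigma^2
    {"type": "eq", "fun": lambda p: np.sum(q**2 * p) * dq - sigma**2},
)

# Start from a flat, maximally ignorant density and optimize.
p0 = np.full_like(q, 1.0 / (q[-1] - q[0]))
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(0.0, None)] * q.size,
               constraints=constraints, options={"maxiter": 500})

# The least-biased density should be close to the normal density.
normal = np.exp(-q**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print("max |p - normal| =", np.abs(res.x - normal).max())
```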
Here we use the experimental observation of the second moment for a zero-centered variable, given by the constant in (1.8), together with the normalization of the probability density, to maximize
$$I = -\int p(q)\,\ln p(q)\,dq + \alpha\left[1 - \int p(q)\,dq\right] + \beta\left[\sigma^2 - \int q^2\,p(q)\,dq\right]. \qquad (1.33)$$
To maximize the entropy the variation of expression (1.33) must vanish,

$$\delta I = 0, \qquad (1.34)$$
and the parameters $\alpha$ and $\beta$, called Lagrange multipliers, are adjusted to satisfy the constraints enclosed in their respective brackets. Taking the variation with respect to the probability density yields

$$\int \left[\ln p(q) + 1 + \alpha + \beta q^2\right]\delta p(q)\,dq = 0. \qquad (1.35)$$
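Since the variation $\delta p(q)$ is arbitrary, (1.35) can hold only if the bracketed term vanishes pointwise. A brief sketch of the standard completion of this variational argument:

$$\ln p(q) + 1 + \alpha + \beta q^2 = 0 \quad\Longrightarrow\quad p(q) = e^{-(1+\alpha)}\,e^{-\beta q^2},$$

and imposing the normalization and second-moment constraints fixes the multipliers, giving $\beta = 1/(2\sigma^2)$ and

$$p(q) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{q^2}{2\sigma^2}\right),$$

the normal distribution.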