4 Expansion of Second Order Renyi Entropy as a Taylor Series
Let us assume that C is symmetric (an undirected graph), thus C = C^T. If one considers the expansion of a vector of probabilities from the state λ = 0, |x(0)>, to another vector at a later state λ, |x(λ)>, by the continuous Markov transformation M = e^(λC), then |x(λ)> = e^(λC) |x(0)> and thus the entropy is given by:
$$ S \;=\; \log_2\!\Big(n \sum_i x_i^2\Big) \;=\; \log_2\!\big(n\,\langle x(\lambda)\,|\,x(\lambda)\rangle\big) \;=\; \log_2\!\big(n\,\langle x(0)\,|\,(e^{\lambda C})^T\, e^{\lambda C}\,|\,x(0)\rangle\big) $$
or rearranging and defining R we get:
$$ R(\lambda) \;=\; 2^{S}/n \;=\; \langle x(0)\,|\, e^{2\lambda C}\,|\, x(0)\rangle \qquad \text{since } C = C^T $$
and then expanding the exponential we get:
$$ R(\lambda) \;=\; \Big\langle x(0)\,\Big|\; I + 2\lambda C + \frac{(2\lambda C)^2}{2!} + \cdots \;\Big|\, x(0)\Big\rangle $$
Thus this power of the second-order Renyi entropy consists of two times the diagonal values of the powers of the connection matrix, plus the unit matrix, as shown. From this one can see that as λ becomes larger and larger, one must take more and more of the topological connections into consideration. This in fact gives a hierarchical expansion of this entropy that gradually 'explores and includes' higher and higher order connectivity. If the row and column entropies are computed to include these higher orders, then they will begin to take into account more complex aspects of the network's interconnectedness. When there is asymmetry, a similar equation can be obtained.
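To make the hierarchical character of this expansion concrete, the following sketch (an illustration under assumed inputs, not code from the paper) builds a small symmetric connection matrix C whose diagonals are chosen so that e^(λC) is Markov, evaluates R(λ) = <x(0)| e^(2λC) |x(0)> exactly, and compares it with Taylor truncations of increasing order. The graph, the value of λ, and the initial vector are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a small undirected path graph (illustrative choice).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
# Diagonals set to negative column sums so that e^(lam*C) is Markov
# (columns of M sum to one); this convention is an assumption, not the paper's code.
C = A - np.diag(A.sum(axis=0))

lam = 0.3                                  # illustrative value of lambda
x0 = np.array([1.0, 0.0, 0.0, 0.0])        # initial probability vector |x(0)>
n = len(x0)

# Exact value: R(lambda) = <x(0)| e^(2*lambda*C) |x(0)>
R_exact = x0 @ expm(2 * lam * C) @ x0
S = np.log2(n * R_exact)                   # entropy in the form used above

# Truncated Taylor series  I + 2*lam*C + (2*lam*C)^2/2! + ...
def R_truncated(order):
    term = np.eye(n)                       # k = 0 term
    total = np.zeros((n, n))
    for k in range(order + 1):
        if k > 0:
            term = term @ (2 * lam * C) / k   # build (2*lam*C)^k / k! iteratively
        total = total + term
    return x0 @ total @ x0

for order in range(6):
    print(f"order {order}: R = {R_truncated(order):.6f}")
print(f"exact    : R = {R_exact:.6f},  S = {S:.4f}")
```

With |x(0)> taken as a basis vector, each term <x(0)| (2λC)^k |x(0)> / k! picks out a diagonal entry of the k-th power of C, which is exactly the gradual inclusion of higher-order connectivity described above.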
5 General Diagonal Values and Eigenvalues
The previous results can be generalized to include totally general diagonal values for
C, by utilizing the diagonal transformations available in the n-parameter Abelian
scaling group. This group simply multiplies any node value by a scaling factor via M = e^(λC). There is a natural interpretation of the actions of this group in terms of network
probability flows as introducing a source or sink of probability at the node which is
acted upon. That action removes the conservation of probability that was maintained
by the Markov monoid, but since such flow was simply used to encapsulate the
topological structure of the network, we can accept this lack of conservation. Thus
one can add to any diagonal of C any positive or negative value representing the scaling value of that coordinate, and one will still have a valid network, since all off-diagonal values of C are unchanged and the M matrix will still give the indicated
flows. This allows one to see the previous arbitrary allocations of '1' or '0' of the C
diagonals in a new light, especially for the eigenvalue computations.
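A brief sketch of this point, under the same illustrative assumptions as the previous example: adding a value to one diagonal entry of C acts as a source (or, if negative, a sink) of probability at that node, so the transformed vector no longer sums to one, while the off-diagonal entries of C, and hence the indicated flows, are untouched.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
C_markov = A - np.diag(A.sum(axis=0))       # columns sum to zero: Markov generator

C_scaled = C_markov.copy()
C_scaled[0, 0] += 0.5                        # a source of probability at node 0

lam = 0.2
x0 = np.array([1/3, 1/3, 1/3])

for name, C in (("Markov", C_markov), ("scaled", C_scaled)):
    x_lam = expm(lam * C) @ x0
    off_diag_unchanged = np.allclose(C - np.diag(np.diag(C)), A)
    print(f"{name:6s}: total probability = {x_lam.sum():.4f}, "
          f"off-diagonals unchanged: {off_diag_unchanged}")
```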
When C is diagonalized, with the values leading to the Markov transformations, or
to the more general values of the diagonals of the last paragraph, one automatically
gets a diagonalization of the M matrix. The interpretation of the eigenvectors is now immediate: they are those linear combinations of nodal flows for which the associated probability is rescaled by a single eigenvalue (a decrease when the transformation is Markov). This follows from the fact that all Markov eigenvalues are less than one, except the single value for equilibrium, which has an eigenvalue of exactly one.
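The following sketch (again an illustrative assumption rather than the paper's own code) diagonalizes a symmetric Markov generator C and confirms that the same diagonalization yields the eigenvalues of M = e^(λC), all of which are at most one, with only the equilibrium eigenvector having eigenvalue one.

```python
import numpy as np
from scipy.linalg import expm

# 4-cycle adjacency matrix and its Markov generator (illustrative assumption).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
C = A - np.diag(A.sum(axis=0))
lam = 0.4

d, V = np.linalg.eigh(C)                    # C = V diag(d) V^T, d in ascending order
M = expm(lam * C)

m_from_C = np.exp(lam * d)                  # eigenvalues of M from those of C
m_direct = np.linalg.eigvalsh(M)            # direct diagonalization of M

print("eigenvalues of C:", np.round(d, 4))          # all <= 0, one equal to 0
print("eigenvalues of M:", np.round(m_from_C, 4))   # all <= 1, one equal to 1
print("same eigenvalues from M directly:", np.allclose(m_from_C, m_direct))
```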