10.4.2 The Entropy-Based Proof of Lemma 4.3
The proof of Lemma 4.3 below is essentially the same as the one originally proposed
by Hoory, but is stated in the language of entropy.
Proof of Lemma 4.3 (a) Consider a Markov process $e_0, e_1, e_2, \ldots, e_{2i+1}$, where $e_0$ is a uniformly chosen random edge from $E_{LR}$ and, for $0 \le j < 2i+1$, $e_{j+1}$ is a random successor edge of $e_j$ chosen uniformly from among the non-returning possibilities. Let $v_0, v_1, v_2, \ldots, v_{2i+2}$ be the vertices visited by this non-returning walk. We observe that for $0 \le j \le i$, each $e_{2j}$ and $e_{2j+1}$ is distributed uniformly in the set $E_{LR}$ and $E_{RL}$, respectively. Furthermore, for $j$ even, $\Pr[v_j = v] = d_v/|E(G)|$ for all $v \in V_L$, and for $j$ odd, $\Pr[v_j = v] = d_v/|E(G)|$ for all $v \in V_R$.
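These marginals are easy to check empirically; below is a minimal simulation sketch (not from the source; the small bipartite graph and all names are illustrative assumptions):

```python
import random
from collections import Counter

# Hypothetical bipartite graph with V_L = {0, 1, 2}, V_R = {'a', 'b', 'c'};
# every vertex has degree >= 2, so a non-returning successor always exists.
adj = {0: ['a', 'b', 'c'], 1: ['a', 'b'], 2: ['b', 'c'],
       'a': [0, 1], 'b': [0, 1, 2], 'c': [0, 2]}
E_LR = [(u, v) for u in (0, 1, 2) for v in adj[u]]  # |E(G)| = 7

def walk(length):
    """Sample e_0 uniformly from E_LR, extend it by `length` uniformly
    chosen non-returning successor edges, and return the visited vertices."""
    u, v = random.choice(E_LR)
    vertices = [u, v]
    for _ in range(length):
        u, v = v, random.choice([w for w in adj[v] if w != u])
        vertices.append(v)
    return vertices

random.seed(0)
samples = 100_000
counts = Counter(walk(2)[2] for _ in range(samples))  # marginal of v_2
# Claim: Pr[v_2 = v] = d_v / |E(G)|, i.e., 3/7, 2/7, 2/7 for v = 0, 1, 2.
print({v: round(c / samples, 3) for v, c in sorted(counts.items())})
```

With the graph above, $|E(G)| = 7$ and the left degrees are $3, 2, 2$, so the printed frequencies should approach $3/7, 2/7, 2/7$.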
Then,
\begin{align*}
\log_2 \mathbb{E}\bigl[n_{2i+1}(e)\bigr] &\ge \mathbb{E}\bigl[\log_2 n_{2i+1}(e)\bigr] \ge H[e_0 e_1 \cdots e_{2i+1} \mid e_0]\\
&= \sum_{j=0}^{i} H[e_{2j+1} \mid e_{2j}] + \sum_{j=1}^{i} H[e_{2j} \mid e_{2j-1}]\\
&= \sum_{j=0}^{i} \mathbb{E}\bigl[\log_2 (d_{v_{2j+1}} - 1)\bigr] + \sum_{j=1}^{i} \mathbb{E}\bigl[\log_2 (d_{v_{2j}} - 1)\bigr]\\
&\ge (i+1) \log_2 (d_R - 1) + i \log_2 (d_L - 1)\\
&= \log_2 \Bigl[(d_R - 1)^{i+1} (d_L - 1)^{i}\Bigr],
\end{align*}
where to justify the first inequality we use Jensen's inequality for the concave function $\log$, and to justify the second we use the fact that the entropy of a random variable is at most the $\log$ of the size of its support: conditioned on $e_0$, the walk takes one of $n_{2i+1}(e_0)$ values. The two equalities hold by the chain rule for entropy together with the Markov property, and because, given $e_j$, the successor $e_{j+1}$ is uniform among $d_{v_{j+1}} - 1$ non-returning possibilities. To justify the last inequality we use Jensen's inequality for the convex function $x \mapsto x \log_2(x - 1)$ ($x \ge 2$). The claim follows by exponentiating both sides.
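To make the last inequality explicit (a sketch, assuming $d_L = |E(G)|/|V_L|$ and $d_R = |E(G)|/|V_R|$ denote the average degrees of the two sides): for the odd-index terms,
\begin{align*}
\mathbb{E}\bigl[\log_2 (d_{v_{2j+1}} - 1)\bigr]
= \sum_{v \in V_R} \frac{d_v}{|E(G)|} \log_2 (d_v - 1)
\ge \frac{|V_R|}{|E(G)|}\, d_R \log_2 (d_R - 1)
= \log_2 (d_R - 1),
\end{align*}
where the middle step is Jensen's inequality for $f(x) = x \log_2(x - 1)$, namely $\frac{1}{|V_R|} \sum_{v \in V_R} f(d_v) \ge f(d_R)$, and the last step uses $|V_R|\, d_R = |E(G)|$. The even-index terms are bounded below by $\log_2(d_L - 1)$ in the same way.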
(b) Similarly,
\begin{align*}
\log_2 \mathbb{E}\bigl[n_{2i}(e)\bigr] \ge \log_2 \Bigl[(d_L - 1)^{i} (d_R - 1)^{i}\Bigr].
\end{align*}
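As a quick numeric sanity check (not from the source; the small bipartite graph and the helper `n_walks` are illustrative assumptions), one can count non-returning extensions exactly and compare the average of $n_{2i+1}(e)$ with the bound from part (a):

```python
# Hypothetical bipartite graph: V_L = {0, 1, 2}, V_R = {'a', 'b', 'c'};
# every vertex has degree >= 2, so non-returning walks never get stuck.
adj = {0: ['a', 'b', 'c'], 1: ['a', 'b'], 2: ['b', 'c'],
       'a': [0, 1], 'b': [0, 1, 2], 'c': [0, 2]}
E_LR = [(u, v) for u in (0, 1, 2) for v in adj[u]]  # |E(G)| = 7

def n_walks(u, v, steps):
    """Exact number of non-returning walks extending edge (u, v) by `steps` edges."""
    if steps == 0:
        return 1
    return sum(n_walks(v, w, steps - 1) for w in adj[v] if w != u)

d_L = d_R = len(E_LR) / 3  # both sides have 3 vertices, so d_L = d_R = 7/3
for i in range(1, 4):
    avg = sum(n_walks(u, v, 2 * i + 1) for u, v in E_LR) / len(E_LR)
    bound = (d_R - 1) ** (i + 1) * (d_L - 1) ** i
    print(i, round(avg, 3), round(bound, 3))  # lemma: avg >= bound
```

For $i = 1$ this gives $\mathbb{E}[n_3(e)] = 18/7 \approx 2.571$ against the bound $(4/3)^3 \approx 2.370$.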
References
[AHL02] N. Alon, S. Hoory, N. Linial, The Moore bound for irregular graphs. Graphs Comb. 18(1), 53–57 (2002)
[Big93] N. Biggs, Algebraic Graph Theory , 2nd edn. (Cambridge University Press, Cambridge,
1993)
[CT91] T.M. Cover, J.A. Thomas, Elements of Information Theory (Wiley-Interscience, New York, 1991)