Proof of Theorem 3.3  First, consider graphs with odd girth. From Observation 3.2, Lemma 3.4 (a) and linearity of expectation we obtain
\[
n \;\ge\; \mathbb{E}\big[\,n_0(v) + n_1(v) + \cdots + n_r(v)\,\big] \;\ge\; 1 + d\sum_{i=1}^{r} (d-1)^{i-1},
\]
where $v \in V(G)$ is chosen with distribution $\pi$ (defined in Lemma 3.4 (a)).
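As an illustrative sanity check (not part of the original proof), the odd-girth bound can be verified on a concrete graph. The Petersen graph is 3-regular with girth $5$, so $r = 2$, $\pi$ is uniform, and, assuming $n_i(v)$ counts the vertices at distance exactly $i$ from $v$ (as Observation 3.2's comparison with $n$ suggests), the bound $1 + d\sum_{i=1}^{r}(d-1)^{i-1} = 10$ is met with equality. The graph construction and the helper `n_dist` below are my own:

```python
from collections import deque

# Petersen graph: outer 5-cycle, inner pentagram, five spokes.
# It is 3-regular with girth 5, so r = 2 and pi is uniform.
edges = [(i, (i + 1) % 5) for i in range(5)]           # outer cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
edges += [(i, i + 5) for i in range(5)]                # spokes

adj = {u: set() for u in range(10)}
for u, w in edges:
    adj[u].add(w)
    adj[w].add(u)

def n_dist(adj, v):
    """Map distance i -> number of vertices at distance exactly i from v (BFS)."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    counts = {}
    for i in dist.values():
        counts[i] = counts.get(i, 0) + 1
    return counts

n, d, r = 10, 3, 2
moore = 1 + d * sum((d - 1) ** (i - 1) for i in range(1, r + 1))
assert moore == 10  # 1 + 3 + 6: the bound is tight for the Petersen graph

# n_0(v) + n_1(v) + n_2(v) already equals n = 10 for every vertex.
for v in adj:
    assert sum(n_dist(adj, v).get(i, 0) for i in range(r + 1)) == n
```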
Now, consider graphs with even girth. Let $e_1$ be chosen uniformly at random from $E$ and let $e_2$ be its companion edge (going in the opposite direction). Note that $e_2$ is also uniformly distributed in $E$. Then, from Observation 3.2, Lemma 3.4 (b) and linearity of expectation we obtain
\[
n \;\ge\; \mathbb{E}\Big[\,\sum_{i=0}^{r} n_i(e_1) + \sum_{i=0}^{r} n_i(e_2)\,\Big] \;\ge\; 2\sum_{i=0}^{r} (d-1)^{i}.
\]
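For a quick numerical illustration (the examples are mine, not from the text): two classical cages meet the even-girth bound $2\sum_{i=0}^{r}(d-1)^i$ with equality, namely $K_{3,3}$ (3-regular, girth $4$, so $r=1$) and the Heawood graph (3-regular, girth $6$, so $r=2$):

```python
# Right-hand side of the even-girth bound: 2 * sum_{i=0}^{r} (d-1)^i.
def moore_even(d, r):
    return 2 * sum((d - 1) ** i for i in range(r + 1))

assert moore_even(3, 1) == 6   # K_{3,3}: 3-regular, girth 4, 6 vertices
assert moore_even(3, 2) == 14  # Heawood graph: 3-regular, girth 6, 14 vertices
```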
10.3.3 The Entropy-Based Proof of Lemma 3.4
The proof of Lemma 3.4 below is essentially the same as the one originally proposed
by Alon, Hoory, and Linial but is stated more naturally using the language of entropy.
Proof of Lemma 3.4 (a)  Consider the Markov process $v, e_1, e_2, \ldots, e_i$, where $v$ is a random vertex of $G$ chosen with distribution $\pi$, $e_1$ is a random edge of $G$ leaving $v$ (chosen uniformly from the $d_v$ choices), and for $1 \le j < i$, $e_{j+1}$ is a random successor edge for $e_j$ chosen uniformly from among the nonreturning possibilities. (If $e_j$ has the form $(x, y)$, then there are $d_y - 1$ possibilities for $e_{j+1}$.)
Let $v_0 = v, v_1, v_2, \ldots, v_i$ be the vertices visited by this non-returning walk. We observe that each $e_j$ is distributed uniformly in the set $E(G)$ and each $v_j$ has distribution $\pi$. Then,
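These two observations can be checked exactly on a small example: starting from $v \sim \pi$ (with $\pi(v) = d_v/2|E|$) and a uniform edge out of $v$, the edge $e_1$ hits each directed edge with probability $1/2|E|$; one non-returning step preserves this uniform distribution; and the head of a uniform directed edge has distribution $\pi$. A sketch with a graph of my own choosing ($K_4$ minus an edge) and exact rational arithmetic:

```python
from fractions import Fraction

# Small irregular graph: K4 minus one edge; every degree is >= 2,
# so the non-returning walk is well defined. (Example graph is mine.)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
darts = [(x, y) for x in adj for y in adj[x]]  # directed edges
m2 = len(darts)                                # 2|E| = 10

# v ~ pi with pi(v) = d_v / 2|E|, then e1 uniform among the d_v edges out of v:
start = {(x, y): Fraction(len(adj[x]), m2) * Fraction(1, len(adj[x]))
         for (x, y) in darts}
assert all(p == Fraction(1, m2) for p in start.values())  # e_1 is uniform

# One non-returning step: from (x, y) move to (y, z) with z != x, uniformly.
def step(dist):
    out = {e: Fraction(0) for e in darts}
    for (x, y), p in dist.items():
        succ = [z for z in adj[y] if z != x]
        for z in succ:
            out[(y, z)] += p * Fraction(1, len(succ))
    return out

assert step(start) == start  # uniformity is preserved, so every e_j is uniform

# The head of a uniform directed edge has distribution pi, so every v_j ~ pi.
pi = {v: Fraction(len(adj[v]), m2) for v in adj}
head = {v: Fraction(0) for v in adj}
for (x, y), p in start.items():
    head[y] += p
assert head == pi
```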
\begin{align*}
\log_2 \mathbb{E}\big[n_i(v)\big] \;\ge\; \mathbb{E}\big[\log_2 n_i(v)\big]
&\;\ge\; H[e_1 e_2 \ldots e_i \mid v] \\
&= H[e_1 \mid v] + \sum_{j=1}^{i-1} H[e_{j+1} \mid e_1 e_2 \ldots e_j\, v] \\
&= \mathbb{E}[\log_2 d_v] + \sum_{j=1}^{i-1} \mathbb{E}\big[\log_2 (d_{v_j} - 1)\big] \\
&= \mathbb{E}\big[\log_2 d_v (d_v - 1)^{i-1}\big] \\
&= \sum_{v} \frac{d_v}{dn} \log_2 d_v (d_v - 1)^{i-1} \\
&\;\ge\; \log_2 d (d-1)^{i-1},
\end{align*}
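The final inequality is the degree-weighted Jensen step: since $x \mapsto x\log_2 x(x-1)^{i-1}$ is convex for $x \ge 2$, the $\pi$-average of $\log_2 d_v(d_v-1)^{i-1}$ is at least its value at the average degree $d$. A quick numerical spot check of my own (the inequality depends only on the multiset of degrees with every $d_v \ge 2$, so the random sequences need not be graphical):

```python
import math
import random

def lhs(degrees, i):
    """sum_v (d_v / (d n)) * log2( d_v * (d_v - 1)^(i-1) ), d = average degree."""
    n = len(degrees)
    d = sum(degrees) / n
    return sum(dv / (d * n) * math.log2(dv * (dv - 1) ** (i - 1))
               for dv in degrees)

def rhs(degrees, i):
    d = sum(degrees) / len(degrees)
    return math.log2(d * (d - 1) ** (i - 1))

random.seed(0)
for _ in range(1000):
    degs = [random.randint(2, 8) for _ in range(random.randint(3, 12))]
    for i in (1, 2, 3, 5):
        assert lhs(degs, i) >= rhs(degs, i) - 1e-12
```

Equality holds exactly when all degrees are equal, matching the regular case of the Moore bound.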