(In general, for any graph G = (V, E), if f : V → V is 1-1, then G_f = (V, E_f), defined analogously, preserves the expansion property of G.¹⁷) The next combinatorial step consists of showing that, for c and d as in the proposition, the ratio of the two largest eigenvalues (in absolute value) of the adjacency matrix of each G_n is bounded away from 1. That is, for some ρ < 1 and all n, this eigenvalue ratio for G_n is at most ρ. (This is shown using the known relation between the expansion constant of a regular graph and the eigenvalue ratio of its adjacency matrix; specifically, ρ ≤ 1 − c²/((4 + c²) · d).) The next observation is that in the graph G^(ℓ)_{f,n} = ({0,1}^n, P), obtained from G_{f,n} by letting P equal the set of ℓ-edge-long paths in G_{f,n}, the eigenvalue ratio is at most ρ^ℓ. By the hypothesis regarding ℓ and the bound ρ < 1, it follows that ρ^ℓ < 1/2. The main combinatorial step is captured by the following lemma.¹⁸
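Since the eigenvalues of A^ℓ are exactly the ℓ-th powers of the eigenvalues of A, the path-power observation above can be checked numerically. The following sketch is my own illustration, not from the text: it uses the complete graph K5 as a convenient small regular graph (rather than the expanders G_{f,n}) and confirms that the eigenvalue ratio of the ℓ-step graph equals ρ^ℓ.

```python
import numpy as np

# Adjacency matrix of K5: 4-regular, eigenvalues 4 and -1 (x4),
# so the ratio of the two largest absolute eigenvalues is 1/4.
n = 5
A = np.ones((n, n)) - np.eye(n)

eig = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
rho = eig[1] / eig[0]              # eigenvalue ratio of A

l = 3                              # path length (illustrative choice)
A_l = np.linalg.matrix_power(A, l) # adjacency counts of l-edge-long paths
eig_l = np.sort(np.abs(np.linalg.eigvalsh(A_l)))[::-1]
rho_l = eig_l[1] / eig_l[0]        # eigenvalue ratio of the l-step graph

# Powering the graph powers every eigenvalue, hence the ratio as well.
assert abs(rho_l - rho ** l) < 1e-9
```

For a ρ bounded away from 1, choosing ℓ large enough therefore drives the ratio below 1/2, which is exactly what the lemma below requires.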
Lemma 2.6.5 (Random Walk Lemma): Let G be a regular graph having an adjacency matrix for which the ratio of the absolute values of the first and second eigenvalues is smaller than 1/2. Let S be a subset of measure μ of the graph's vertices. Then a random walk of length t on G will hit S with probability at least 1 − (1 − 0.5 · μ)^t.
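Lemma 2.6.5 can be sanity-checked by exact computation on a small example. The sketch below is my own illustration, not the book's: K5 is chosen only because its eigenvalue ratio (1/4) is smaller than 1/2, and S is a single vertex. It computes the exact probability that a t-step walk hits S and compares it with the lemma's bound.

```python
import numpy as np

n, d = 5, 4
A = np.ones((n, n)) - np.eye(n)    # K5: 4-regular, eigenvalue ratio 1/4 < 1/2
P = A / d                          # random-walk transition matrix
S = {0}                            # a single vertex, so measure mu = 1/5
mu = len(S) / n
mask = np.array([0.0 if v in S else 1.0 for v in range(n)])

t = 6                              # walk length (illustrative choice)
p = np.full(n, 1.0 / n) * mask     # uniform start, minus the mass already in S
for _ in range(t - 1):
    p = (p @ P) * mask             # one step, then discard mass that entered S
hit = 1.0 - p.sum()                # probability the t-vertex walk meets S

# The lemma's lower bound on the hitting probability:
assert hit >= 1 - (1 - 0.5 * mu) ** t
```

Here the exact hitting probability is 1 − 0.8 · 0.75^(t−1) (each surviving vertex has 3 of its 4 neighbors outside S), comfortably above the lemma's bound.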
Proof Idea: Because it is of little relevance to the topic of this book, we provide only a rough idea of what is involved in this proof. The proof refers to
the stochastic matrix obtained from the adjacency matrix of G by division with
G 's degree, and it views probability distributions over the graph's vertex set as
linear combinations of the (orthogonal) eigenvectors of this matrix. The ratio of
eigenvalues in the new matrix is as in the adjacency matrix of G . Furthermore,
the largest eigenvalue is 1, and the eigenvector associated with it is the uniform
distribution.
Going step-by-step along the random walk, we bound from above the probability mass assigned to random walks that do not pass through the set S. At each step, the component of the current distribution that is in the direction of the first eigenvector loses a factor μ of its weight (where this loss is due to the fraction of the paths that enter S in the current step). Using the bound on the second eigenvalue, it can be shown that in each step the L2-norm of the other components is decreased by a factor of 2 (so that the residual distribution is "pushed" toward the direction of the first eigenvector). Intuitively, the event of passing through the set S acts as a sieve on the residual distribution, but this sieve is effective only when the residual distribution is close to uniform, a condition that each random step on the expander restores.
¹⁷ That is, we let E_f def= {(u, v) : (f(u), v) ∈ E}, and let N(S) def= {v ∈ V : ∃u ∈ S s.t. (u, v) ∈ E} and N_f(S) def= {v ∈ V : ∃u ∈ S s.t. (u, v) ∈ E_f}. Then N_f(S) = {v ∈ V : ∃f(u) ∈ f(S) s.t. (f(u), v) ∈ E} = N(f(S)), where f(S) def= {f(u) : u ∈ S}. Using the 1-1 property of f, we have |f(S)| = |S|, and the claim follows (i.e., if G has expansion factor c, then so does G_f).
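The identity N_f(S) = N(f(S)) from footnote 17 can be checked mechanically. The following is a hypothetical check of mine (not from the text), using a small random directed graph and a random 1-1 map f on its vertices.

```python
import random

random.seed(1)
n = 8
V = range(n)
# A small random edge set and a random permutation f (which is 1-1).
E = {(u, v) for u in V for v in V if u != v and random.random() < 0.4}
perm = list(V)
random.shuffle(perm)
f = lambda u: perm[u]

# E_f as defined in footnote 17: (u, v) is an edge iff (f(u), v) is in E.
E_f = {(u, v) for u in V for v in V if (f(u), v) in E}

def N(S, edges):
    """Neighbor set of S under the given edge set."""
    return {v for v in V for u in S if (u, v) in edges}

S = {0, 3, 5}
fS = {f(u) for u in S}
assert N(S, E_f) == N(fS, E)   # the identity N_f(S) = N(f(S))
assert len(fS) == len(S)       # f is 1-1, so |f(S)| = |S|
```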
¹⁸ Below, a random walk of length t means a sequence of t vertices generated as follows. First, a start vertex is selected uniformly in the vertex set. For i = 2, ..., t, the i-th vertex is selected uniformly among the neighbors of the (i − 1)-st vertex. We stress that if a vertex has a self-loop, then it is considered a neighbor of itself.
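The walk defined in footnote 18 translates directly into code. The function below is a hypothetical helper of mine (not from the text), taking the graph as an adjacency list; a self-loop is represented by a vertex listing itself among its neighbors.

```python
import random

def random_walk(neighbors, t, rng=random):
    """Sample a length-t walk per footnote 18: a uniform start vertex,
    then each next vertex uniform among the previous vertex's neighbors."""
    walk = [rng.randrange(len(neighbors))]
    for _ in range(t - 1):
        walk.append(rng.choice(neighbors[walk[-1]]))
    return walk
```

For example, `random_walk([[j for j in range(5) if j != i] for i in range(5)], 7)` samples a 7-vertex walk on K5.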