Proof:
\[
\mathrm{E}(X) \;=\; \sum_{x} \Pr[X = x] \cdot x \;\ge\; \sum_{x < v} \Pr[X = x] \cdot 0 \;+\; \sum_{x \ge v} \Pr[X = x] \cdot v \;=\; \Pr[X \ge v] \cdot v
\]
The claim follows.
The Markov inequality is typically used in cases in which one knows very little about
the distribution of the random variable; it suffices to know its expectation and at least
one bound on the range of its values. See Exercise 1.
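As a numerical illustration (a sketch not in the original text, using a hypothetical coin-flip variable), Markov's bound can be compared with the empirical tail probability of a simple nonnegative random variable:

```python
import random

# A hypothetical nonnegative random variable: the number of heads in 10
# fair coin flips. Its expectation is 5, so Markov's inequality gives
# Pr[X >= 8] <= E(X)/8 = 0.625 -- a weak but assumption-free bound.
def sample():
    return sum(random.randint(0, 1) for _ in range(10))

trials = 100_000
v = 8
expectation = 5.0  # E(X) for 10 fair coin flips
hits = sum(1 for _ in range(trials) if sample() >= v)

print(f"empirical Pr[X >= {v}] = {hits / trials:.4f}")
print(f"Markov bound  E(X)/{v} = {expectation / v:.4f}")
```

The empirical tail probability here is roughly 0.055, far below the Markov bound of 0.625, which reflects that the bound uses nothing about the distribution beyond its expectation.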
Using Markov's inequality, one gets a “possibly stronger” bound for the deviation
of a random variable from its expectation. This bound, called Chebyshev's inequal-
ity, is useful provided one has additional knowledge concerning the random variable
(specifically, a good upper bound on its variance). For a random variable X of finite
expectation, we denote by
\[
\mathrm{Var}(X) \stackrel{\mathrm{def}}{=} \mathrm{E}\big[(X - \mathrm{E}(X))^2\big]
\]
the variance of X, and observe that \(\mathrm{Var}(X) = \mathrm{E}(X^2) - \mathrm{E}(X)^2\).
Chebyshev's Inequality: Let X be a random variable, and let \(\delta > 0\). Then
\[
\Pr\big[\,|X - \mathrm{E}(X)| \ge \delta\,\big] \;\le\; \frac{\mathrm{Var}(X)}{\delta^2}
\]
Proof: We define a random variable \(Y \stackrel{\mathrm{def}}{=} (X - \mathrm{E}(X))^2\) and apply the Markov inequality. We get
\[
\Pr\big[\,|X - \mathrm{E}(X)| \ge \delta\,\big] \;=\; \Pr\big[(X - \mathrm{E}(X))^2 \ge \delta^2\big] \;\le\; \frac{\mathrm{E}\big[(X - \mathrm{E}(X))^2\big]}{\delta^2}
\]
and the claim follows.
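Continuing the hypothetical coin-flip example from above (again a sketch, not part of the original text), the variance-based bound is already noticeably sharper: with \(\mathrm{Var}(X) = 2.5\), Chebyshev gives \(\Pr[|X - 5| \ge 3] \le 2.5/9 \approx 0.28\), versus the Markov bound of 0.625 for the one-sided event \(X \ge 8\).

```python
import random

# Same hypothetical variable: heads in 10 fair coin flips.
# E(X) = 5 and Var(X) = 10 * (1/2) * (1/2) = 2.5, so Chebyshev gives
# Pr[|X - 5| >= 3] <= 2.5 / 9 ~ 0.278.
def sample():
    return sum(random.randint(0, 1) for _ in range(10))

trials = 100_000
delta = 3
mu, var = 5.0, 2.5
hits = sum(1 for _ in range(trials) if abs(sample() - mu) >= delta)

print(f"empirical Pr[|X - {mu}| >= {delta}] = {hits / trials:.4f}")
print(f"Chebyshev bound Var(X)/delta^2    = {var / delta**2:.4f}")
```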
Chebyshev's inequality is particularly useful for analysis of the error probability of
approximation via repeated sampling. It suffices to assume that the samples are picked
in a pairwise-independent manner.
Corollary (Pairwise-Independent Sampling): Let \(X_1, X_2, \ldots, X_n\) be pairwise-independent random variables with the same expectation, denoted \(\mu\), and the same variance, denoted \(\sigma^2\). Then, for every \(\varepsilon > 0\),
\[
\Pr\left[\,\left|\frac{\sum_{i=1}^{n} X_i}{n} - \mu\right| \ge \varepsilon\,\right] \;\le\; \frac{\sigma^2}{\varepsilon^2 n}
\]
The \(X_i\)'s are called pairwise-independent if for every \(i \neq j\) and all \(a\) and \(b\), it holds that \(\Pr[X_i = a \wedge X_j = b]\) equals \(\Pr[X_i = a] \cdot \Pr[X_j = b]\).
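To make the corollary concrete, the following sketch uses a standard pairwise-independent construction (a linear function over a prime field; this particular construction is an assumption of the example and is not given in the excerpt above): two truly random seeds \(a, b\) suffice to produce \(n\) pairwise-independent samples.

```python
import random

# A standard pairwise-independent construction (an illustration; not
# taken from the text above): fix a prime P and pick a, b uniformly in
# Z_P. Then X_i = (a*i + b) mod P, over n < P distinct indices i, are
# each uniform over Z_P and pairwise independent, since for i != j the
# pair (X_i, X_j) determines (a, b) uniquely.
P = 1009  # a prime larger than n, chosen arbitrarily for this demo

def pairwise_independent_samples(n):
    assert n < P  # the indices 1, ..., n must be distinct modulo P
    a = random.randrange(P)
    b = random.randrange(P)
    return [(a * i + b) % P for i in range(1, n + 1)]

# Estimating the mean mu = (P - 1)/2 of the uniform distribution on Z_P:
# the corollary bounds the probability of an eps-deviation of the sample
# average by sigma^2 / (eps^2 * n), where sigma^2 = (P^2 - 1)/12.
n, eps = 1000, 30.0
mu = (P - 1) / 2
sigma2 = (P * P - 1) / 12
avg = sum(pairwise_independent_samples(n)) / n

print(f"sample average = {avg:.2f}, true mean = {mu}")
print(f"corollary bound on Pr[|avg - mu| >= {eps}]: {sigma2 / (eps**2 * n):.4f}")
```

The point of the corollary is exactly this economy: full independence of the samples is not needed for the Chebyshev-style error bound, so the construction above gets away with only two random field elements.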