definition of a real-valued random variable as a function defined on the sample space:

Definition 2.4 Let $(\Omega, \Pr)$ be a finite probability space. A random variable is a function $X : \Omega \to \mathbb{R}$.
Since the sample spaces we are using are finite, a random variable takes only a finite number of values. Random variables can be used to define events related to a real number $x \in \mathbb{R}$ such as $\{\omega \in \Omega \mid X(\omega) \le x\}$, $\{\omega \in \Omega \mid X(\omega) > x\}$ or $\{\omega \in \Omega \mid X(\omega) = x\}$. The probabilities of these events are denoted simply by $\Pr(X \le x)$, $\Pr(X > x)$, and $\Pr(X = x)$, respectively. The expected value (or expectation, or mean) of a random variable is the average of its values weighted by their probability of occurrence:
Definition 2.5 Let $X$ be a random variable taking $x_1, x_2, \ldots, x_n$ as values. The expected value or mean of $X$ is defined as: $E(X) = \sum_{i=1}^{n} x_i \cdot \Pr(X = x_i)$.
Exercise 2.2 Let $X$ be the random variable defined by the sum of the numbers appearing on two rolled fair dice. Compute the expected value of $X$.
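To make Definition 2.5 concrete, here is a minimal sketch (not part of the original text) that checks this exercise numerically: it enumerates the 36 equally likely outcomes of two fair dice and applies the formula $E(X) = \sum_i x_i \cdot \Pr(X = x_i)$ with exact rational arithmetic.

```python
# A minimal sketch: computing E(X) for the sum of two fair dice by
# enumerating all 36 equally likely outcomes and applying Definition 2.5.
from fractions import Fraction
from collections import Counter

# Count how often each possible sum occurs among the 36 outcomes.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

# E(X) = sum over values x of x * Pr(X = x); Fraction avoids rounding.
expected = sum(Fraction(x) * Fraction(n, 36) for x, n in counts.items())
print(expected)  # prints 7
```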
Let us see now how the expected value of a random variable may be used to estimate the number of trials in an experiment. For example, suppose that a randomized algorithm (an algorithm that accepts random bits as input, see Sect. 2.3; a typical example is an algorithm that searches for primes among randomly chosen integers) produces some desired result with probability $p$. A natural question is then: how many times must the algorithm be executed, on average, to obtain this result?

The following proposition shows that the number of times the algorithm must be run is $1/p$:
Proposition 2.2 Suppose that an event $E \subseteq \Omega$ occurs with probability $p$. Then the expected value of the number of trials until $E$ occurs is $1/p$.
Proof Let $E \subseteq \Omega$ be an event and $p = \Pr(E)$. Let $X$ be a random variable that gives the average number of independent trials until $E$ occurs and $\mu = E(X)$. We see that $\Pr(X = 1) = p$ and $\Pr(X > 1) = 1 - p$, so that, with probability $p$, $E$ occurs already on the first trial and with probability $1 - p$ one needs the average value $\mu$ of trials after the first (unsuccessful) trial, i.e., a total of $1 + \mu$ trials. Therefore, $\mu = 1 \cdot p + (1 + \mu) \cdot (1 - p)$, which implies that $\mu = 1/p$.
Example 2.1 A Bernoulli experiment is a random experiment such that the outcome of a single trial is either 1 (success) with probability $p$ or 0 (failure) with probability $1 - p$. If a sequence of independent Bernoulli trials with success probability $p$ is performed, then we are in the situation described in Proposition 2.2. Although we are not going to explain density and distribution functions here (see, for example, [167] for more details), we mention that the geometric distribution estimates the number of failures there will be before the first success is achieved in a sequence of independent Bernoulli trials (more generally, the Pascal distribution estimates the number of failures before a specified number of successes is reached).
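For reference, and not stated explicitly in the text above, the standard facts are: if $Y$ counts the failures before the first success in independent Bernoulli trials with success probability $p$, then $\Pr(Y = k) = (1 - p)^k \, p$ for $k = 0, 1, 2, \ldots$, and $E(Y) = (1 - p)/p$. The total number of trials $X = Y + 1$ then satisfies $E(X) = 1/p$, in agreement with Proposition 2.2.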