which, by the contradiction hypothesis, is greater than 1/(n · p(n)) for infinitely many n's. So it follows that D (which is a probabilistic polynomial-time algorithm) distinguishes polynomially many samples of G(U_n) from polynomially many samples of U_{2n}. Using Theorem 3.2.6, we derive a contradiction to the hypothesis (of the current theorem) that G is a pseudorandom generator, and the current theorem follows.
3.6.3. Applications: A General Methodology
Sharing a pseudorandom function allows parties to determine random-looking values
depending on their current views of the environment (which need not be known a priori).
To appreciate the potential of this tool, one should realize that sharing a pseudorandom
function is essentially as good as being able to agree, on the fly, on the association of
random values to (on-line) given values, where the latter are taken from a huge set of
possible values. We stress that this agreement is achieved without communication and
synchronization: Whenever some party needs to associate a random value to a given value v ∈ {0,1}^n, it will associate to v the same random value r_v ∈ {0,1}^n.
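For concreteness, here is a minimal sketch of this idea in Python, assuming HMAC-SHA256 as a stand-in for the shared pseudorandom function (the key and function names below are hypothetical, chosen only for illustration): every party holding the shared key derives the same random-looking value r_v from any given value v, with no communication at all.

```python
import hmac
import hashlib

def associated_value(key: bytes, v: bytes) -> bytes:
    """The random-looking value r_v associated to v; every party holding
    the same key computes the same value, without any interaction."""
    return hmac.new(key, v, hashlib.sha256).digest()

shared_key = b'secret shared in advance'   # hypothetical shared secret

# Two parties, given the same on-line value v, obtain the same r_v independently.
v = b'some value determined by the current view of the environment'
r_party_1 = associated_value(shared_key, v)
r_party_2 = associated_value(shared_key, v)
assert r_party_1 == r_party_2
```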
As an illustrative example, consider the problem of identifying friend or foe, in
which members of a club sharing some secret wish to be able to identify one another
as belonging to the club. A possible solution is for the club members to share a secret
function, defined over a huge domain, and prove their membership in the club by
answering a random challenge presented to them, with the value of the secret function
evaluated at the challenge. We claim that using a pseudorandom function in the role
of the secret function guarantees that it will be infeasible for an adversary to pass as a
member, even after conducting polynomially many interactions with members in which
the adversary may ask them to reply to challenges of its choice. To prove this claim,
consider what happens when the secret function is a truly random one. (We stress that
this is merely a mental experiment, since it is infeasible to share such a huge random
object.) In such a case, the random function's values at new points (corresponding
to new challenges that the adversary should answer) are uncorrelated to its values at
any other points (corresponding to answers the adversary has obtained by challenging
legitimate members). Thus, the adversary will fail in such an imaginary situation. It
follows that the adversary must also fail in the actual situation (in which the secret
function is selected from a pseudorandom ensemble), or else we derive a distinguisher
of pseudorandom functions from truly random ones.
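To make the claim concrete, the following is a minimal sketch of one round of this identification protocol, assuming HMAC-SHA256 as a stand-in for the shared secret (pseudorandom) function; the key and helper names are hypothetical. The challenger picks a fresh random challenge, and the claimed member replies with the secret function's value at that challenge.

```python
import hmac
import hashlib
import secrets

CLUB_KEY = b'secret shared by the club members'   # hypothetical shared secret


def respond(key: bytes, challenge: bytes) -> bytes:
    """Member's reply: the secret function evaluated at the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Challenger accepts iff the reply equals the secret function's value."""
    return hmac.compare_digest(respond(key, challenge), response)


# One round of identification.
challenge = secrets.token_bytes(16)        # fresh random challenge
response = respond(CLUB_KEY, challenge)    # computed by the claimed member
assert verify(CLUB_KEY, challenge, response)
```

Since each new challenge is answered by the function's value at an essentially fresh point, replies obtained on challenges of the adversary's choice do not help it answer a new random challenge, exactly as argued above.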
In general, the following two-step methodology is useful in many cases:
1. Design your scheme assuming that all legitimate users share a random function, f : {0,1}^n → {0,1}^n. (The adversaries may be able to obtain, from the legitimate users, the values of f on arguments of their choice, but will not have direct access to f itself.⁶) This step culminates in proving the security of the scheme, assuming that f is indeed uniformly chosen among all possible such functions, while ignoring the question of how such an f can be selected and handled. (A concrete sketch of this methodology appears below.)
⁶ This is different from the Random Oracle Model, where the adversary has direct access to a random oracle (that is later “implemented” by a function, the description of which is also given to the adversary).
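As an illustration of the two-step methodology, the following sketch (hypothetical names, with HMAC-SHA256 again standing in for a pseudorandom function) writes a scheme against an abstract function oracle: in the analysis of the first step the oracle is a truly random function, simulated here by lazy sampling, while in the actual construction it is replaced by a pseudorandom function selected by a short shared key, in the spirit of the friend-or-foe example above.

```python
import hmac
import hashlib
import secrets


class LazyRandomFunction:
    """Step 1 (analysis): a truly random function f : {0,1}^n -> {0,1}^n,
    simulated by sampling each value the first time it is queried."""
    def __init__(self, n_bytes: int = 32):
        self.n_bytes = n_bytes
        self.table = {}

    def __call__(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = secrets.token_bytes(self.n_bytes)
        return self.table[x]


class PseudorandomFunction:
    """Step 2 (construction): the random function is replaced by a
    pseudorandom function specified by a short, uniformly chosen key."""
    def __init__(self, key: bytes):
        self.key = key

    def __call__(self, x: bytes) -> bytes:
        return hmac.new(self.key, x, hashlib.sha256).digest()


def scheme_response(f, value: bytes) -> bytes:
    """The scheme itself is written once and is oblivious to which of the
    two function oracles it is given."""
    return f(value)


# Step 1: analyze security with the (lazily sampled) random function.
ideal = scheme_response(LazyRandomFunction(), b'some value')
# Step 2: deploy with a pseudorandom function shared via a short key.
real = scheme_response(PseudorandomFunction(secrets.token_bytes(32)), b'some value')
```

The point of the second step is that any feasible attack on the real scheme would yield a distinguisher between the pseudorandom function and a truly random one, contradicting pseudorandomness; security therefore carries over from the idealized analysis.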