Proof of Claim 2.3.3.1: Suppose, toward the contradiction, that the fraction of
good rows is greater than 66.8% (the argument for columns is analogous). Then,
to reach a contradiction, we construct an algorithm for inverting f as follows. On
input y, the algorithm repeats the following steps 10,000 times:

1. Select x_2 uniformly in {0, 1}^n.
2. Invoke A on input (y, f(x_2)), and obtain its output, denoted (x', x'').
3. If f(x') = y, then halt with output x'.
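The three steps above can be sketched in code. Everything in this harness is illustrative, not from the text: the function name `invert`, modeling f and A as Python callables, and fixing the bit-length n as a parameter are all assumptions.

```python
import random

def invert(f, A, y, n, trials=10_000):
    """Sketch of the claimed inverter for f.

    f and A are the objects from the claim; treating them as Python
    callables (and the trial count as a parameter) is an assumption
    made for illustration only.
    """
    for _ in range(trials):
        x2 = random.getrandbits(n)        # step 1: uniform x2 in {0,1}^n
        x_cand, _ = A(y, f(x2))           # step 2: run A on (y, f(x2))
        if f(x_cand) == y:                # step 3: verify the candidate
            return x_cand                 # success: a preimage of y
    return None                           # all trials failed
```

Note that step 3 makes the output verifiable regardless of A's behavior: A may answer arbitrarily, and the inverter succeeds only when it actually holds a preimage of y.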
Clearly, this algorithm works in polynomial time, and it is left to analyze its
success in inverting f. For every good x_1, the probability that the algorithm
fails to invert f on input y = f(x_1) is at most (1 - 0.001)^10,000 < 0.001. Thus,
the probability that the algorithm succeeds in inverting f on input f(U_n) is at
least 0.668 * 0.999 > 2/3, in contradiction to the hypothesis that f is
(1/3)-one-way.
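The two numerical inequalities used in this analysis can be checked directly:

```python
# Failure bound for a good x_1: (1 - 0.001)^10,000 < 0.001.
fail = (1 - 0.001) ** 10_000
assert fail < 0.001          # in fact roughly 4.5e-5

# Overall success bound: 0.668 * 0.999 > 2/3.
success = 0.668 * 0.999
assert success > 2 / 3       # 0.667332 > 0.666666...
```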
2.3.3. Discussion
2.3.3.1. Reducibility Arguments: A Digest
Let us recall the structure of the proof of Theorem 2.3.2. Given a weak one-way function
f, we first constructed a polynomial-time-computable function g. This was done with
the intention of later proving that g is strongly one-way. To prove that g is strongly
one-way, we used a reducibility argument. The argument transforms efficient algorithms
that supposedly contradict the strong one-wayness of g into efficient algorithms that
contradict the hypothesis that f is weakly one-way. Hence g must be strongly one-way.
We stress that our algorithmic transformation, which is in fact a randomized
Cook reduction,[5] makes no implicit or explicit assumptions about the structure of the
prospective algorithms for inverting g. Assumptions such as the "natural" assumption
that the inverter of g works independently on each block cannot be justified (at least
not at our current state of understanding of the nature of efficient computations).
We use the term reducibility argument, rather than just saying a reduction, so as to
emphasize that we do not refer here to standard (worst-case-complexity) reductions.
Let us clarify the distinction: In both cases we refer to reducing the task of solving one
problem to the task of solving another problem; that is, we use a procedure solving
the second task in order to construct a procedure that solves the first task. However, in
standard reductions one assumes that the second task has a perfect procedure solving
it on all instances (i.e., on the worst case) and constructs such a procedure for the first
task. Thus, the reduction may invoke the given procedure (for the second task) on very
"non-typical" instances. This cannot be done in our reducibility arguments. Here, we
are given a procedure that solves the second task only with a certain probability with
respect to a certain distribution. Thus, in employing a reducibility argument, we cannot
invoke this procedure on arbitrary instances of our choice. Instead, we must consider
the probability distribution,
[5] A (randomized) Cook reduction of one computational problem, denoted Pi_1, to
another problem, denoted Pi_2, is a (probabilistic) polynomial-time oracle machine
that solves Pi_1 while making queries to the oracle Pi_2.
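As a toy illustration of the footnote's notion of a Cook reduction (the specific problems below are chosen for illustration and are not from the text): a machine deciding compositeness (playing the role of Pi_1) that makes queries to an oracle for the smallest nontrivial divisor (playing Pi_2).

```python
def smallest_divisor_oracle(n):
    """Stand-in for the Pi_2 oracle. A real reduction treats this as a
    black box and does not care how (or how fast) it is computed."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime (or n <= 1)

def is_composite(n, oracle):
    """Oracle machine for Pi_1 (compositeness): it runs in polynomial
    time itself, making a single query to the Pi_2 oracle."""
    return n > 1 and oracle(n) != n
```

The point of the example is the interface: `is_composite` is efficient relative to its oracle, exactly as the oracle machine in the footnote solves Pi_1 relative to Pi_2.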