where G's error is exactly 1/4), the foregoing procedure is unlikely to invert f. Note that the average error probability of G (which is averaged over all possible inputs of the form (f(x), ·)) cannot be decreased by repeating G several times (e.g., G may always answer correctly on 3/4 of the inputs and always err on the remaining 1/4). What
is required is an alternative way of using the algorithm G, a way that does not double the original error probability of G. The key idea is to generate the r's in a way that requires applying algorithm G only once per each r (and i), instead of twice. Specifically, we shall use algorithm G to obtain a “guess” for b(x, r⊕e_i) and obtain b(x, r) in a different way. The good news is that the error probability is no longer doubled,
since we use G only to get a “guess” of b(x, r⊕e_i). The bad news is that we still need to know b(x, r), and it is not clear how we can know b(x, r) without applying G. The answer is that we can guess b(x, r) by ourselves. This is fine if we need to guess b(x, r) for only one r (or logarithmically in |x| many r's), but the problem is that we need to know (and hence guess) the values of b(x, r) for polynomially many r's. An obvious way of guessing these b(x, r)'s yields an exponentially vanishing success probability. Instead, we generate these polynomially many r's such that, on one hand, they are “sufficiently random,” whereas, on the other hand, we can guess all the b(x, r)'s with noticeable success probability. Specifically, generating the r's in a particular pairwise-independent manner will satisfy both (seemingly contradictory) requirements. We stress that in case we are successful (in our guesses for all the b(x, r)'s), we can retrieve x with high probability. Hence, we retrieve x with noticeable probability.
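The single-query scheme described above can be checked in isolation. The following sketch is illustrative only: it takes b to be the inner-product predicate b(x, r) = ⟨x, r⟩ mod 2 (as in the standard statement of this theorem) and replaces G by an idealized, always-correct stand-in, so that only the recovery logic is exercised. The names G, sigma, and the sizes are assumptions of this sketch, not part of the text.

```python
import random

n = 12  # illustrative input length

def b(x, r):
    """Inner-product predicate: b(x, r) = <x, r> mod 2 (bit vectors as lists)."""
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

x = [random.randint(0, 1) for _ in range(n)]  # the unknown preimage

def G(r):
    """Idealized stand-in for the guessing algorithm: always correct here,
    so that only the recovery logic below is being checked."""
    return b(x, r)

# Since b(x, r) xor b(x, r xor e_i) = b(x, e_i) = x_i, a (guessed) bit sigma
# for b(x, r) plus one call to G on r xor e_i yields x_i -- G is invoked once
# per (r, i), never twice.
r = [random.randint(0, 1) for _ in range(n)]
sigma = b(x, r)  # in the actual proof this bit is guessed, not computed
recovered = []
for i in range(n):
    r_flip = r.copy()
    r_flip[i] ^= 1  # r xor e_i
    recovered.append(sigma ^ G(r_flip))
assert recovered == x
```

With a real G that errs on some inputs, the recovered bits would be decided by majority vote over many r's, which is exactly why pairwise-independent r's are introduced next.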
A word about the way in which the pairwise-independent r's are generated (and the corresponding b(x, r)'s are guessed) is indeed in order. To generate m = poly(n) many r's, we uniformly (and independently) select l def= ⌈log₂(m + 1)⌉ strings in {0,1}^n. Let us denote these strings by s^1, ..., s^l. We then guess b(x, s^1) through b(x, s^l). Let us denote these guesses, which are uniformly (and independently) chosen in {0,1}, by σ^1 through σ^l. Hence, the probability that all our guesses for the b(x, s^i)'s are correct is 2^{-l} = 1/poly(n)
. The different r's correspond to the different non-empty subsets of {1, 2, ..., l}. Specifically, we let r^J def= ⊕_{j∈J} s^j. The reader can easily verify that the r^J's are pairwise independent, and each is uniformly distributed in {0,1}^n. The key observation is that

b(x, r^J) = b(x, ⊕_{j∈J} s^j) = ⊕_{j∈J} b(x, s^j)

Hence, our guess for the b(x, r^J)'s is ⊕_{j∈J} σ^j, and with noticeable probability all our guesses are correct.
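This construction can be verified mechanically. The sketch below again assumes b is the inner-product predicate and uses small illustrative sizes; it builds the 2^l − 1 strings r^J from l random seeds and confirms that, on the run where all l seed guesses happen to be correct, XOR-ing the σ^j's gives the right value of b(x, r^J) for every non-empty J.

```python
import itertools
import math
import random

n, m = 16, 7                      # illustrative: m = poly(n) desired r's
l = math.ceil(math.log2(m + 1))   # l seed strings suffice, since 2^l - 1 >= m

def b(x, r):
    """Inner-product predicate: b(x, r) = <x, r> mod 2."""
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

x = [random.randint(0, 1) for _ in range(n)]
s = [[random.randint(0, 1) for _ in range(n)] for _ in range(l)]  # s^1..s^l
sigma = [b(x, sj) for sj in s]  # model the run where all l guesses are correct;
# this happens with probability 2^-l >= 1/(2(m+1)), i.e., 1/poly(n)

count = 0
for size in range(1, l + 1):
    for J in itertools.combinations(range(l), size):  # non-empty subsets J
        rJ = [0] * n
        guess = 0
        for j in J:
            rJ = [a ^ c for a, c in zip(rJ, s[j])]  # r^J = xor of s^j, j in J
            guess ^= sigma[j]                       # guess = xor of sigma^j
        # key observation: b(x, r^J) = xor over j in J of b(x, s^j)
        assert b(x, rJ) == guess
        count += 1
assert count == 2**l - 1  # one r^J per non-empty subset of {1, ..., l}
```

The assertion holds because the inner product is linear in its second argument over GF(2), which is exactly the identity displayed above.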
2.5.2.3. Back to the Actual Proof
Following is a formal description of the inverting algorithm, denoted A. We assume, for simplicity, that f is length-preserving (yet this assumption is not essential). On input y (supposedly in the range of f), algorithm A sets n def= |y| and l def= ⌈log₂(2n · p(n)² + 1)⌉, where p(·) is the polynomial guaranteed earlier (i.e., ε(n) > 1/p(n) for the infinitely many n's in N). Algorithm A proceeds as follows: