Steps 1 and 2, which can be implemented in time $n \cdot 2^l \cdot O(t_G(n)) = O((n/\varepsilon)^2 \cdot t_G(n))$ and $n \cdot O(l \cdot 2^l) = O((n/\varepsilon)^2 \cdot \log(n/\varepsilon))$, respectively.^10

Finally, we define algorithm A. On input $y = f(x)$, the algorithm selects $j \in \{1, \ldots, n\}$ with probability $2^{-2j+1}$ (and halts with no output otherwise). It invokes the preceding implementation of algorithm A on input $y$ with parameter $\varepsilon_j \stackrel{\mathrm{def}}{=} 2^{-j-1}$ and returns whatever A does. The expected running time of A is

$$\sum_{j=1}^{n} 2^{-2j+1} \cdot O\!\left(\left(\frac{n}{2^{-j-1}}\right)^{2} \cdot \Big(t_G(n) + \log\big(n \cdot 2^{j+1}\big)\Big)\right) \;=\; O(n^{3}) \cdot t_G(n)$$

(assuming $t_G(n) = \Omega(\log n)$). Letting $i$ be an index satisfying Claim 2.5.4.1 (and letting $S_n$ be the corresponding set), we consider the case in which $j$ (selected by A) is greater than or equal to $i$. By Claim 2.5.4.2, in such a case, and for $x \in S_n$, algorithm A inverts $f$ on $f(x)$ with probability at least $1/2$. Using $i \le \log_2(1/\varepsilon(n))$, we get

$$\Pr[A(f(U_n)) = U_n] \;\ge\; \Pr[U_n \in S_n] \cdot \Pr[j \ge i] \cdot \frac{1}{2} \;\ge\; \frac{\varepsilon(n)}{2} \cdot 2^{-2i+1} \cdot \frac{1}{2} \;\ge\; \frac{\varepsilon(n)}{2} \cdot 2\,\varepsilon(n)^{2} \cdot \frac{1}{2} \;=\; \frac{\varepsilon(n)^{3}}{2}$$

The proposition follows.
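The random selection of the index $j$ in the definition of algorithm A can be sketched as follows. This is a minimal illustration, assuming $j$ ranges over $\{1, \ldots, n\}$ and realizing the probabilities $2^{-2j+1}$ with a single uniform draw (both the function name and this mechanism are choices of the illustration, not of the text):

```python
import random

def select_index(n, rng=random.random):
    """Return j in {1, ..., n} with Pr[j] = 2**(-2*j + 1),
    or None (halt with no output) with the remaining probability."""
    u = rng()
    acc = 0.0
    for j in range(1, n + 1):
        acc += 2.0 ** (-2 * j + 1)  # Pr[j] = 2^{-2j+1}: 1/2, 1/8, 1/32, ...
        if u < acc:
            return j
    return None  # halts with no output
```

Note that the probabilities sum to less than 1 (indeed $\sum_{j \ge 1} 2^{-2j+1} = 2/3$), so the algorithm halts without output with constant probability, as stated.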
Comment. Using an additional trick,^11 one can save a factor of $\Theta(n/\log^3(1/\varepsilon(n)))$ in the running time, resulting in an expected running time of $O(n^2 \cdot \log^3(1/\varepsilon(n))) \cdot t_G(n)$.
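As a numerical sanity check of the success-probability bound in the proof above, the following sketch takes $i = \lfloor \log_2(1/\varepsilon(n)) \rfloor$ (which satisfies $i \le \log_2(1/\varepsilon(n))$) and uses only the stated lower bounds; the function name is illustrative:

```python
import math

def success_lower_bound(eps):
    """Lower-bound Pr[A inverts f] as in the proof:
    Pr[U_n in S_n] * Pr[j >= i] * 1/2, using the bounds
    Pr[U_n in S_n] >= eps/2 and Pr[j >= i] >= 2**(-2*i + 1)."""
    i = math.floor(math.log2(1 / eps))  # satisfies i <= log2(1/eps)
    return (eps / 2) * 2.0 ** (-2 * i + 1) * 0.5
```

For every choice of $\varepsilon$ the computed value is at least $\varepsilon^3/2$, the bound used to conclude the proposition.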
^10 Using the special structure of matrix $B$, one can show that given a vector $w$, the product $Bw$ can be computed in time $O(l \cdot 2^l)$. Hint: $B$ (known as the Sylvester matrix) can be written recursively as

$$S_k = \begin{pmatrix} S_{k-1} & S_{k-1} \\ S_{k-1} & -S_{k-1} \end{pmatrix}$$

where $S_0 = +1$ and $-M$ means flipping the $+1$ entries of $M$ to $-1$ and vice versa. So

$$S_k \begin{pmatrix} w' \\ w'' \end{pmatrix} = \begin{pmatrix} S_{k-1} w' + S_{k-1} w'' \\ S_{k-1} w' - S_{k-1} w'' \end{pmatrix}$$

Thus, letting $T(k)$ denote the time used in multiplying $S_k$ by a $2^k$-dimensional vector, we have $T(k) = 2 \cdot T(k-1) + O(2^k)$, which solves to $T(k) = O(k \cdot 2^k)$.
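The recursion in the hint translates directly into code. The following is a sketch (assuming the input length is a power of 2) of multiplying $S_k$ by a $2^k$-dimensional vector in time $O(k \cdot 2^k)$:

```python
def sylvester_mul(w):
    """Compute S_k * w for the 2^k x 2^k Sylvester matrix S_k,
    where w has length 2^k, in time T(k) = 2*T(k-1) + O(2^k)."""
    m = len(w)
    if m == 1:
        return [w[0]]  # base case: S_0 = +1
    half = m // 2
    top = sylvester_mul(w[:half])   # S_{k-1} w'
    bot = sylvester_mul(w[half:])   # S_{k-1} w''
    # S_k (w', w'') = (S_{k-1} w' + S_{k-1} w'', S_{k-1} w' - S_{k-1} w'')
    return [a + b for a, b in zip(top, bot)] + [a - b for a, b in zip(top, bot)]
```

For instance, `sylvester_mul([1, 2, 3, 4])` returns `[10, -2, -4, 0]`, matching direct multiplication by $S_2$.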
^11 We further modify algorithm A by setting $2^l = O(1/\varepsilon^2)$ (rather than $2^l = O(n/\varepsilon^2)$). Under the new setting, with constant probability, we recover correctly a constant fraction of the bits of $x$ (rather than all of them). If $x$ were a codeword under an asymptotically good error-correcting code (cf. [138]), this would suffice. To avoid this assumption, we modify algorithm A so that it tries to recover certain XORs of bits of $x$ (rather than individual bits of $x$). Specifically, we use an asymptotically good linear code (i.e., having constant rate, correcting a constant fraction of errors, and having an efficient decoding algorithm). Thus, the modified A recovers correctly a constant fraction of the bits in the encoding of $x$ under such a code, and using the decoding algorithm it recovers $x$.
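The footnote's use of a linear code rests on the fact that each bit of a linear encoding of $x$ is the XOR of some subset of the bits of $x$. A toy sketch (the 3-bit generator matrix here is illustrative only, not an asymptotically good code):

```python
def encode(x_bits, generator_rows):
    """Encode x under a binary linear code: codeword bit i is the XOR
    (sum mod 2) of the x-bits indexed by generator_rows[i]."""
    return [sum(x_bits[i] for i in row) % 2 for row in generator_rows]

# Toy generator: codeword bits are x0, x0^x1, x0^x2, x1^x2.
rows = [[0], [0, 1], [0, 2], [1, 2]]
```

Recovering the bits of `encode(x_bits, rows)` is exactly recovering XORs of bits of $x$, which is what the modified algorithm does before invoking the decoder.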