the worst-case asymptotics: for $n = 1000007$ and $t = 67774$ and $w = 34482$, one
can take $k = 65438456875$ and $\ell = 67775474425$, while [36, Lemma 7] chooses
$k = 932238525625$.
My main interest is in much smaller values of $\ell$. Choosing $k$ as $\lfloor(1 - w/n)\ell\rfloor$
guarantees $0 \le k < \ell$ since $w > 0$, and guarantees $(1 - w/n - k/\ell)^2 + k/\ell^2 <
1/\ell^2 + 1/\ell$, so $w, k, \ell$ are in the parameter space if $(1 - w/n)^2 \ge (1 - (t +
1)/n)(1 - 1/\ell) + 1/\ell^2 + 1/\ell$; i.e., $(1 - w/n)^2 \ge 1 - (t + 1)/n + (t + 1)/n\ell + 1/\ell^2$;
i.e., $(1 - w/n)^2 \ge (1 - J/n)^2 + (t + 1)/n\ell + 1/\ell^2$ where $J$ is the big-field Johnson
bound; i.e., $J - w \ge ((t + 1)/\ell + n/\ell^2)/(2 - w/n - J/n)$. One can achieve this
with $\ell \in (\lg n)^{O(1)}$ if $J - w$ is at least $n/(\lg n)^{O(1)}$.
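As a concrete check of this parameter choice, here is a small Python sketch (the values of $n, t, w, \ell$ are made up for illustration, not taken from the text) that picks $k = \lfloor(1 - w/n)\ell\rfloor$ and verifies the sufficient condition above in exact rational arithmetic:

```python
from fractions import Fraction
from math import sqrt

def choose_k(n, t, w, ell):
    """Pick k = floor((1 - w/n) * ell) and check, in exact rational
    arithmetic, the sufficient condition
    (1 - w/n)^2 >= (1 - (t+1)/n)(1 - 1/ell) + 1/ell^2 + 1/ell."""
    k = (n - w) * ell // n              # floor((1 - w/n) * ell), computed exactly
    lhs = Fraction(n - w, n) ** 2
    rhs = ((1 - Fraction(t + 1, n)) * (1 - Fraction(1, ell))
           + Fraction(1, ell) ** 2 + Fraction(1, ell))
    return k, lhs >= rhs

# Toy parameters (made up for illustration): length n = 1024, t = 200,
# decoding w = 103 errors.  The big-field Johnson bound
# J = n - sqrt(n(n - t - 1)) is about 106 here, so J - w is comfortably
# positive and a small ell already suffices.
n, t, w, ell = 1024, 200, 103, 50
J = n - sqrt(n * (n - t - 1))
k, ok = choose_k(n, t, w, ell)
print(k, ok)
```

Note that $0 \le k < \ell$ holds automatically here ($k = 44 < 50$), matching the floor guarantee in the text.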
There are limits to how far this idea can be pushed. For example, it is tempting
to take $k, \ell$ as constants, so that cost factors such as $\ell^2$ can be replaced by $O(1)$.
The same replacement was used to justify, e.g., the statement "quadratic in the
blocklength $n$" in [4, Abstract]. Apparently it is not instantly obvious that, at
least for small $q$ (such as the case $q = 2$ highlighted in [4]), this replacement is
fundamentally flawed!
The difficulty is the following. If $q$ is constant, or more generally $n^{o(1)}$, then
$t \in o(n)$, so $J - t/2 \in o(t)$. Choosing $k, \ell \in O(1)$ then forces $w$ to be smaller
than $\lfloor t/2 \rfloor$ for all sufficiently large $n$: in other words, the algorithm cannot correct
more errors than Berlekamp's algorithm once $n$ is sufficiently large. For the same
reason, the "quadratic" claim in [4] is content-free: it might be true that taking
constants $k, \ell$ limits the algorithm in [4] to cost $O(n^2)$, but then the algorithm
cannot correct more errors than a trivial combination of brute-force list decoding
for small $n$ and Berlekamp's algorithm for large $n$, which also costs $O(n^2)$.
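The asymptotic point can be seen numerically. A short Python sketch (parameters chosen for illustration, not taken from the text) shows that as $t/n$ shrinks, the margin $J - t/2$ becomes a vanishing fraction of $t$:

```python
from math import sqrt

def johnson_margin_ratio(n, t):
    """Return (J - t/2) / t with J = n - sqrt(n(n - t - 1)),
    the big-field Johnson bound.  When t is in o(n) this ratio tends to 0."""
    J = n - sqrt(n * (n - t - 1))
    return (J - t / 2) / t

# Parameters made up for illustration, with t roughly n / lg n (so t in o(n)):
cases = [(10**3, 100), (10**6, 50_000), (10**9, 33_000_000)]
ratios = [johnson_margin_ratio(n, t) for n, t in cases]
print(ratios)
```

Since the bound in the previous section needs roughly $\ell \ge ((t+1)/\ell$-type terms$)/(J - w)$, a margin that vanishes relative to $t$ rules out constant $\ell$ for any $w$ beyond $\lfloor t/2 \rfloor$.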
Of course, this criticism does not apply to bounds that treat $\epsilon, k, \ell$ as variables,
such as the bound $O(n^2/\epsilon^5)$ in [4, Corollary 5.7]. Furthermore, the "rational"
algorithms of [54] and [10] allow a better tradeoff between $k, \ell, w$ and can mean-
ingfully take $k, \ell \in O(1)$.
History. Håstad showed in 1988 that one could find all small roots of a poly-
nomial modulo a large integer $N$ by applying the famous LLL lattice-basis-re-
duction algorithm. The same result was found independently by Vallée, Girault,
and Toffin in 1989. See [37] and [53].
Coppersmith, in a famous 1996 paper, incorporated multiplicities into the
Vallée–Girault–Toffin algorithm, drastically increasing the range of roots that
could be found. Coppersmith also showed that similar lattices could be used to
find not merely polynomial values that are multiples of $N$ but also polynomial
values that are divisors of $N$. See [24] and [25].
The next year Howgrave-Graham in [40] introduced a critical simplification
in Coppersmith's algorithm. Coppersmith had identified the relevant lattice by
linear constraints; Howgrave-Graham directly wrote down generators for the
lattice. For example, for the problem of finding a divisor of $N$ within $X$ of
$V$, Howgrave-Graham chose parameters $k, \ell$, wrote down the lattice generated
by $N^k, N^{k-1}(Xy + V), \ldots, (Xy + V)^k, \ldots, (Xy + V)^k (Xy)^{\ell-k-1}$, found a short
vector $Q$ in the lattice, and found small roots of $Q$. See [41, page 101] (with
"$p_0$" for $V$, "$u$" for $k$, "$h$" for $\ell$, "$b_1$" for $Q$, "$N$" for $N$, and "$X$" for $X$).
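To make the generator list concrete, here is a small Python sketch (a toy illustration with made-up values of $N, V, X, k, \ell$; it only builds the basis matrix and does not run LLL) that writes each generator as its coefficient vector in $y$. The matrix comes out lower triangular, so its determinant is simply the product of the diagonal entries, $N^{k(k+1)/2} X^{\ell(\ell-1)/2}$:

```python
from math import prod

def poly_mul(a, b):
    """Multiply polynomials in y given as coefficient lists (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def howgrave_graham_basis(N, V, X, k, ell):
    """Rows are the coefficient vectors (in y) of the generators
    N^k, N^(k-1)(Xy+V), ..., (Xy+V)^k, (Xy+V)^k(Xy), ..., (Xy+V)^k(Xy)^(ell-k-1)."""
    rows, p = [], [1]                   # p runs through (Xy + V)^i
    for i in range(k + 1):
        rows.append([c * N ** (k - i) for c in p])
        if i < k:
            p = poly_mul(p, [V, X])     # multiply by Xy + V
    for _ in range(ell - k - 1):
        p = poly_mul(p, [0, X])         # multiply by Xy
        rows.append(list(p))
    return [row + [0] * (ell - len(row)) for row in rows]  # pad to ell columns

# Toy example (all values made up): N = 1003, center V = 30, radius X = 5.
B = howgrave_graham_basis(1003, 30, 5, 2, 4)
det = prod(B[i][i] for i in range(4))   # lower triangular => product of diagonal
print(B)
print(det == 1003**3 * 5**6)            # N^(k(k+1)/2) * X^(ell(ell-1)/2)
```

Feeding these rows to any lattice-basis-reduction routine (e.g., LLL) would produce the short vector $Q$ mentioned above.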