The weak assumption lg q^m ∈ (lg n)^{O(1)} implies that each of these operations in F_{q^m} can be carried out using (lg n)^{O(1)} bit operations. The weaker assumption lg q^m ∈ n^{O(1)} implies that each of these operations in F_{q^m} can be carried out using n^{O(1)} bit operations.
Fast multiplication. Multiplying two d-coefficient polynomials in F_{q^m}[x], i.e., two polynomials of degree below d, costs d(lg d)^{1+o(1)}. See, e.g., my online survey paper [8, Section 4] for algorithmic details and credits.
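To make the operation concrete, here is a hypothetical Python sketch of polynomial multiplication over a small prime field F_p (the prime p = 101 is my stand-in for the field F_{q^m}; polynomials are coefficient lists, constant term first). It uses schoolbook convolution, which costs d^2 field operations; the d(lg d)^{1+o(1)} bound cited above requires FFT-style techniques instead.

```python
# Schoolbook multiplication in F_p[x]; coefficients are listed from the
# constant term upward.  This only illustrates the operation being
# counted: it costs O(d^2), whereas the fast methods cited in the text
# achieve d(lg d)^{1+o(1)}.
p = 101  # small prime, a stand-in for the field size q^m

def poly_mul(f, g):
    """Product of f and g in F_p[x]."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

# (x + 2)(x + 3) = x^2 + 5x + 6 over F_101
print(poly_mul([2, 1], [3, 1]))  # [6, 5, 1]
```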
Fast multiplication of many inputs. Computing a product of d linear polynomials costs d(lg d)^{2+o(1)}. See, e.g., [8, Section 12].
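The standard way to achieve this bound is a balanced product tree: pair up the factors and multiply, halving the number of polynomials at each level. The following toy sketch (again over a hypothetical prime field F_101, with schoolbook multiplication standing in for the fast multiplication that the cost bound assumes) shows the tree structure.

```python
# Balanced product tree: multiply d linear polynomials pairwise,
# halving the number of factors at each of about lg d levels.  With
# fast multiplication at each level the total is d(lg d)^{2+o(1)};
# poly_mul here is schoolbook, so only the structure is illustrative.
p = 101

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

def product_tree(factors):
    """Product of a nonempty list of polynomials in F_p[x]."""
    if len(factors) == 1:
        return factors[0]
    mid = len(factors) // 2
    return poly_mul(product_tree(factors[:mid]), product_tree(factors[mid:]))

# (x-1)(x-2)(x-3)(x-4) = x^4 - 10x^3 + 35x^2 - 50x + 24 over F_101
linear = [[(-r) % p, 1] for r in (1, 2, 3, 4)]
print(product_tree(linear))  # [24, 51, 35, 91, 1]
```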
Fast evaluation. Computing Y(x_1), Y(x_2), ..., Y(x_d), given x_1, ..., x_d ∈ F_{q^m} and a d-coefficient polynomial Y ∈ F_{q^m}[x], costs d(lg d)^{2+o(1)}. See, e.g., [8, Section 18].
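The technique behind this bound is a remainder tree: since Y mod (x - x_i) is exactly Y(x_i), one reduces Y modulo the product of the left half of the moduli, then the right half, and recurses. A hypothetical sketch over F_101 (schoolbook helpers, so the structure rather than the speed is the point):

```python
# Multipoint evaluation via a remainder tree: Y(x_i) = Y mod (x - x_i),
# so reduce Y modulo the product of each half of the (x - x_i) and
# recurse.  With fast multiplication and division this costs
# d(lg d)^{2+o(1)}; the helpers below are schoolbook.
p = 101

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

def poly_mod(f, g):
    """Remainder of f modulo a monic polynomial g in F_p[x]."""
    f = f[:]
    while len(f) >= len(g):
        coef, shift = f[-1], len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - coef * gi) % p
        f.pop()  # leading coefficient is now zero
    return f or [0]

def multipoint_eval(Y, points):
    """[Y(x) for x in points], computed with a remainder tree."""
    if len(points) == 1:
        return [poly_mod(Y, [(-points[0]) % p, 1])[0]]
    mid = len(points) // 2
    values = []
    for half in (points[:mid], points[mid:]):
        modulus = [1]
        for x0 in half:
            modulus = poly_mul(modulus, [(-x0) % p, 1])
        values += multipoint_eval(poly_mod(Y, modulus), half)
    return values

# Y = x^2 + 1 at the points 0, 1, 2, 3 over F_101
print(multipoint_eval([1, 0, 1], [0, 1, 2, 3]))  # [1, 2, 5, 10]
```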
Fast interpolation. For any distinct x_1, ..., x_d ∈ F_{q^m} and any y_1, ..., y_d ∈ F_{q^m} there is a unique polynomial Y ∈ F_{q^m}[x] of degree below d having Y(x_1) = y_1, Y(x_2) = y_2, and so on through Y(x_d) = y_d. Computing this polynomial Y from x_1, ..., x_d, y_1, ..., y_d costs d(lg d)^{2+o(1)}. See, e.g., [8, Section 23].
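For a small concrete check of this statement, here is a hypothetical sketch of direct Lagrange interpolation over F_101 (my stand-in field). The direct formula costs d^2 field operations; the fast algorithms cited reach d(lg d)^{2+o(1)} by combining product and remainder trees.

```python
# Lagrange interpolation over F_p: recover the unique Y of degree
# below d from d point/value pairs.  This direct formula costs O(d^2);
# the fast algorithms cited in the text achieve d(lg d)^{2+o(1)}.
p = 101

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

def interpolate(xs, ys):
    """The unique Y in F_p[x] of degree below d with Y(xs[i]) = ys[i]."""
    d = len(xs)
    Y = [0] * d
    for i in range(d):
        num, denom = [1], 1          # i-th Lagrange basis polynomial
        for j in range(d):
            if j != i:
                num = poly_mul(num, [(-xs[j]) % p, 1])
                denom = denom * (xs[i] - xs[j]) % p
        scale = ys[i] * pow(denom, p - 2, p) % p  # Fermat inverse
        Y = [(Yc + scale * nc) % p for Yc, nc in zip(Y, num)]
    return Y

# the unique parabola through (0,1), (1,2), (2,5) over F_101 is x^2 + 1
print(interpolate([0, 1, 2], [1, 2, 5]))  # [1, 0, 1]
```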
Fast lattice-basis reduction. If an ℓ×ℓ matrix over F_{q^m}[x] has nonzero determinant D then there is a nonzero linear combination Q of the matrix columns such that deg Q ≤ (deg D)/ℓ. Here deg Q means the maximum degree of the entries of Q.
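As a toy check of this degree bound (a hypothetical example of my own, not the algorithm of [30]): take the 2×2 matrix M = [[x^2+1, x], [x, 1]] over F_2[x], whose determinant is D = 1, so the bound promises a nonzero column combination Q of degree at most 0. Encoding F_2[x] elements as Python integers (bit i is the coefficient of x^i), a brute-force search over low-degree polynomial coefficients finds such a Q.

```python
# Brute-force check of the degree bound over F_2[x].  Polynomials are
# ints: bit i = coefficient of x^i.  For M = [[x^2+1, x], [x, 1]] the
# determinant D = 1, so some nonzero combination Q of the columns has
# deg Q <= (deg D)/2 = 0, i.e. constant entries.
def pmul(a, b):                       # multiplication in F_2[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    return r

def pdeg(a):                          # degree; -1 for the zero polynomial
    return a.bit_length() - 1

c1, c2 = (0b101, 0b10), (0b10, 0b1)   # columns of M
best = None
for a in range(8):                    # polynomial coefficients of degree < 3
    for b in range(8):
        if a == b == 0:
            continue
        q = (pmul(a, c1[0]) ^ pmul(b, c2[0]),
             pmul(a, c1[1]) ^ pmul(b, c2[1]))
        if q != (0, 0) and max(map(pdeg, q)) <= 0:
            best = (a, b, q)          # e.g. a = 1, b = x gives Q = (1, 0)
print(best)
```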
If each of the matrix entries is a d-coefficient polynomial then computing such a Q costs ℓ^ω d(lg ℓd)^{O(1)} by [30, Theorem 3.8]. Here ω is any positive real number such that ℓ×ℓ matrix multiplication costs O(ℓ^ω). One can trivially take ω = 3, but state-of-the-art matrix-multiplication techniques have pushed ω below 2.5.
There is an error in the proof of [30, Theorem 3.8]: the authors assume, without justification, that they can quickly find x_0 ∈ F_{q^m} such that D(x_0) ≠ 0. Unfortunately, it is entirely possible that every x_0 ∈ F_{q^m} will have D(x_0) = 0; in such cases, the algorithm stated in [30, Section 3] will fail. The simplest workaround is to replace F_{q^m} by an extension having significantly more than deg D elements; extension degree (lg ℓd)^{O(1)} always suffices, leaving the cost bound ℓ^ω d(lg ℓd)^{O(1)} unaffected. (Extension degree 2 suffices for the matrix shape used later in this paper, since D visibly splits into linear factors in F_{q^m}[x].)
A closer look at the algorithm in [30] shows that the cost is d(lg d)^{2+o(1)} if ℓ and the required extension degree are bounded by (lg d)^{o(1)}. The same complexity also appeared later in [3]. As ℓ increases, the algorithm in [3] scales as ℓ^{3+o(1)} rather than ℓ^{ω+o(1)}.
Fast root-finding. The traditional factorization method for a polynomial in Q[y], introduced by Zassenhaus in [55] four decades ago, begins with a factorization of the polynomial modulo a small prime number p, and then uses Newton iteration ("Hensel's lemma") to lift the factorization to factorizations modulo p^2, p^4, etc. A few Newton steps produce enough p-adic precision to determine the factorization in Q[y]; see, e.g., [29, Theorem 15.20]. This procedure relies on a