as effectively as the Guruswami–Sudan algorithm. I had not actually read the Guruswami–Sudan paper at that point, and I did not realize that Guruswami and Sudan were missing the Howgrave-Graham simplification. I also had no idea that Koetter and Vardy had quantitatively improved the Guruswami–Sudan results, moving from the big-field Johnson bound to the F_q Johnson bound; I learned this much later when Augot kindly sent me a copy of [4]. I do not see any way to use the algorithm stated in [9] to obtain the Koetter–Vardy results: an extra tweak is required, and is the main content of Section 4 of this paper. The advantages of this tweaked algorithm over the Koetter–Vardy algorithm are analogous to the advantages of the Howgrave-Graham algorithm over the Guruswami–Sudan algorithm: most importantly, the local specification of the lattice is eliminated in favor of directly writing down lattice generators starting from V.
Cohn and Heninger in [23] presented an explicit function-field version of the Howgrave-Graham algorithm, including a generalization from the rational function field F_{q^m}(x) to arbitrary function fields; this generalization includes list
decoding for algebraic-geometry codes. In the case of Reed–Solomon codes, [23, Section 6] reaches the big-field Johnson bound with cost only n^{2+3ε+o(1)}. Cost bounds for other choices of ℓ can also be extracted straightforwardly from the analysis in [23] and match the cost bounds shown in this section. However, this generalization still does not cover the Koetter–Vardy results.
4 Correcting Nearly n′ − √(n′(n′ − t − 1)) Errors
This section states a simple high-speed list-decoding algorithm that corrects errors up to the F_q Johnson bound.
Parameters. The algorithm has four parameters: a positive integer w ≤ n, the number of errors to be corrected; an integer j ≥ 0; an integer k ≥ j; and an integer ℓ ≥ (q − 1)j + k. The algorithm assumes that t + 1 ≤ n and that these parameters satisfy

n·k(k + 1)/2 + n(q − 1)·j(j + 1)/2 + (n − t − 1)·ℓ(ℓ − 1)/2 < ℓ(k(n − w) + jw);

i.e., (1 − (t + 1)/n)(1 − 1/ℓ) < (1 − w/n)² + (w/n)²/(q − 1) − (1 − w/n − k/ℓ)² − (w/n − (q − 1)j/ℓ)²/(q − 1) − k/ℓ² − (q − 1)j/ℓ².
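As a sanity check on this condition, the two displayed forms can be compared numerically. The following sketch (helper names are mine, not the paper's; exact rational arithmetic keeps the strict inequalities honest) evaluates both:

```python
# Sanity check: the raw parameter condition and its normalized "i.e." form
# are the same strict inequality.  Helper names are illustrative, not from
# the paper; Fraction keeps every comparison exact.
from fractions import Fraction as Fr

def condition_raw(n, q, t, w, j, k, l):
    # n k(k+1)/2 + n(q-1) j(j+1)/2 + (n-t-1) l(l-1)/2 < l(k(n-w) + jw)
    lhs = Fr(n * k * (k + 1), 2) \
        + Fr(n * (q - 1) * j * (j + 1), 2) \
        + Fr((n - t - 1) * l * (l - 1), 2)
    return lhs < l * (k * (n - w) + j * w)

def condition_normalized(n, q, t, w, j, k, l):
    # (1-(t+1)/n)(1-1/l) < (1-w/n)^2 + (w/n)^2/(q-1)
    #   - (1-w/n-k/l)^2 - (w/n-(q-1)j/l)^2/(q-1) - k/l^2 - (q-1)j/l^2
    wn, kl, jl = Fr(w, n), Fr(k, l), Fr(j, l)
    lhs = (1 - Fr(t + 1, n)) * (1 - Fr(1, l))
    rhs = (1 - wn) ** 2 + wn ** 2 / (q - 1) \
        - (1 - wn - kl) ** 2 \
        - (wn - (q - 1) * jl) ** 2 / (q - 1) \
        - kl / l - (q - 1) * jl / l
    return lhs < rhs
```

Dividing the first form by nℓ²/2 and completing the squares yields the second, so the two predicates agree for every admissible choice of (n, q, t, w, j, k, ℓ).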
Suitable j, k, ℓ exist whenever w is smaller than the F_q Johnson bound, as discussed below. The special case j = 0 of this algorithm (with the computations of B and E straightforwardly eliminated) is exactly the algorithm of the previous section, and is usable only when w is smaller than the big-field Johnson bound.
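The existence claim is easy to probe numerically: compute the F_q Johnson bound n′ − √(n′(n′ − t − 1)) with n′ = n(1 − 1/q), and search by brute force over small (j, k, ℓ) for parameters satisfying the condition above. A sketch, with illustrative function names of my own choosing:

```python
# Illustration of the existence claim: compute the F_q Johnson bound
# n' - sqrt(n'(n' - t - 1)) with n' = n(1 - 1/q), then search by brute
# force for the smallest l admitting (j, k) with 0 <= j <= k and
# (q-1)j + k <= l that satisfies the parameter condition.
# All names here are illustrative; this is not the paper's notation.

def fq_johnson_bound(n, q, t):
    np = n * (q - 1) / q            # n' = n(1 - 1/q)
    return np - (np * (np - t - 1)) ** 0.5

def find_parameters(n, q, t, w, max_l=200):
    for l in range(1, max_l + 1):
        for j in range(l + 1):
            for k in range(j, l + 1):
                if (q - 1) * j + k > l:
                    break           # larger k only violates the bound
                lhs = (n * k * (k + 1) + n * (q - 1) * j * (j + 1)
                       + (n - t - 1) * l * (l - 1))
                if lhs < 2 * l * (k * (n - w) + j * w):
                    return j, k, l
    return None
```

For instance, with n = 32, q = 2, t = 10 the big-field Johnson bound is 32 − √672 ≈ 6.08 while the F_2 Johnson bound is 16 − √80 ≈ 7.06, so w = 7 errors are reachable here only with j > 0; the search above does find such parameters.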
The asymptotic cost bounds for this algorithm, as functions of n, ℓ, q^m, are exactly as in the previous section: for example, the cost is bounded by n(lg n)^{O(1)} if ℓ ∈ (lg n)^{O(1)} and lg q^m ∈ (lg n)^{O(1)}, and is bounded by n^{ε+2+o(1)} if ℓ ∈ O(n^ε) and lg q^m ∈ O(n^ε).
Input and output. The algorithm input is a vector v ∈ F_q^n. The algorithm output is the set of c ∈ C of Hamming distance at most w from v.
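As executable pseudocode, this specification reads as follows; representing C extensionally and scanning it is of course exponential in general and is shown only to pin down the input/output behavior, not as the paper's algorithm:

```python
# Reference semantics of the input/output specification: given v, return
# every codeword of C within Hamming distance w.  Brute force over C
# serves only to define the required output set.

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def list_decode_spec(C, v, w):
    return [c for c in C if hamming_distance(c, v) <= w]

# Toy example: the length-3 binary repetition code.
C = [(0, 0, 0), (1, 1, 1)]
```

Any list-decoding algorithm meeting the error bound of this section must return exactly this set, merely much faster.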