introduced by Gorenstein and Zierler in [33]; and classical Goppa codes, which
had been introduced by Goppa in [31] and [32].
The w-error-correction problem for C is the problem of finding c ∈ C, given a
vector at distance w from c. For w ≤ ⌊t/2⌋ the vector dictates a unique possibility
for c, but this does not mean that c is easy to find. There are at least q^{n−mt}
codewords, and the cost of enumerating them all is exponential in n, except in
the (rarely used) case that t is very close to n/m. Fortunately, early research
produced much better algorithms for the ⌊t/2⌋-error-correction problem:
Peterson in [46] introduced an algorithm using n^{O(1)} arithmetic operations
in F_{q^m}. Each of those operations uses a polynomial number of bit operations,
under the extremely weak assumption that q^m has n^{O(1)} bits. Applications
typically choose q^m to have only O(lg n) bits.
Berlekamp in [7] introduced an algorithm using only O(n^2) operations in
F_{q^m}. If q^m has (lg n)^{O(1)} bits then each operation in F_{q^m} uses (lg n)^{O(1)} bit
operations, so Berlekamp's algorithm uses n^2 (lg n)^{O(1)} bit operations. (A small
sketch of the shortest-recurrence computation at the heart of this style of decoder
appears after this list.)
Justesen in [42], and independently Sarwate as reported in [48], introduced
an algorithm using only n(lg n)^{2+o(1)} operations in F_{q^m}. If q^m has only
(lg n)^{O(1)} bits then this algorithm uses only n(lg n)^{O(1)} bit operations.
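To make the core of such decoders concrete, the following minimal Python sketch implements the Berlekamp-Massey shortest-linear-recurrence computation over a prime field GF(p). This is an illustration of the standard recurrence-finding step used in Berlekamp-style syndrome decoding, not the algorithm of [7] itself; the prime p = 13 and the test sequence are hypothetical choices made only for the example.

    def berlekamp_massey(s, p):
        # Shortest linear recurrence (connection polynomial) for s over GF(p), p prime.
        # Returns (C, L) with C = [1, c1, ..., cL] such that
        # s[j] + c1*s[j-1] + ... + cL*s[j-L] = 0 (mod p) for all j >= L.
        C, B = [1], [1]      # current and previous connection polynomials
        L, m, b = 0, 1, 1    # recurrence length, shift since last update, last discrepancy
        for j in range(len(s)):
            # discrepancy between s[j] and the value predicted by the current recurrence
            d = (s[j] + sum(C[i] * s[j - i] for i in range(1, L + 1))) % p
            if d == 0:
                m += 1
                continue
            coef = d * pow(b, -1, p) % p
            newC = C + [0] * max(0, m + len(B) - len(C))
            for i, bi in enumerate(B):
                newC[i + m] = (newC[i + m] - coef * bi) % p
            if 2 * L <= j:
                L, B, b, m = j + 1 - L, C, d, 1
            else:
                m += 1
            C = newC
        return C, L

    # Toy check: the Fibonacci sequence mod 13 satisfies s[j] = s[j-1] + s[j-2],
    # so the recovered connection polynomial is 1 - x - x^2, i.e. [1, 12, 12], with L = 2.
    print(berlekamp_massey([0, 1, 1, 2, 3, 5, 8, 0, 8], p=13))

In a Berlekamp-style decoder the input sequence would be the syndrome sequence computed from the received word, and the recovered polynomial would be the error locator; those surrounding steps are omitted here.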
What about w > ⌊t/2⌋? The big-field Johnson bound states that there are only
polynomially many possibilities for c if w < n − √(n(n−t−1)), assuming t + 1 ≤ n.
Guruswami and Sudan, in a famous 1998 paper [35], introduced a polynomial-
time algorithm to compute the list of possibilities for c if w < n − √(n(n−t−1)).
An intermediate range of w was already covered by an algorithm of Sudan in
[50], but [35] introduced "multiplicities" to push w much higher.
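To see what this radius looks like numerically, here is a small Python calculation with hypothetical parameters (a length-1024 code with t = 160; these numbers are illustrative, not taken from the text). It only evaluates the bound; it does not implement the Guruswami-Sudan algorithm.

    from math import sqrt

    n, t = 1024, 160                            # hypothetical example parameters

    unique_radius  = t // 2                     # floor(t/2) = 80: unique decoding
    johnson_radius = n - sqrt(n * (n - t - 1))  # big-field Johnson radius, about 83.9

    # Any w strictly below johnson_radius leaves only polynomially many candidates,
    # which Guruswami-Sudan list decoding finds in polynomial time.
    print(unique_radius, johnson_radius)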
Even better, the F_q Johnson bound states that there are only polynomially
many possibilities for c if w < n′ − √(n′(n′−t−1)) where n′ = n(q−1)/q,
assuming t + 1 ≤ n′ and q ∈ n^{O(1)}. In 2000 Koetter and Vardy introduced a
polynomial-time algorithm to compute the list of possibilities; see [34, Section
6.3.8]. Compared to the unique-decoding case w = ⌊t/2⌋, the big-field Johnson
bound extends the distance by approximately t^2/(8n), and the F_q Johnson bound
further extends the distance by approximately t^2/(8n(q−1)); this improvement
is particularly impressive for q = 2.
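Continuing the hypothetical example above (n = 1024, t = 160, now with q = 2), the following sketch compares the three radii and the two increments mentioned in the text; the t^2/(8n) and t^2/(8n(q−1)) terms describe the leading-order size of the gains rather than the exact values.

    from math import sqrt

    n, t, q = 1024, 160, 2
    n_prime = n * (q - 1) / q                                # n' = n(q-1)/q

    unique    = t // 2                                       # 80
    big_field = n - sqrt(n * (n - t - 1))                    # about 83.9
    f_q       = n_prime - sqrt(n_prime * (n_prime - t - 1))  # about 88.1

    print(unique, big_field, f_q)
    print(t**2 / (8 * n), t**2 / (8 * n * (q - 1)))          # leading-order increments: 3.125, 3.125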
Unfortunately, "polynomial time" does not mean fast. Several subsequent
papers have improved the complexity of list decoding, but each paper fails at
least one, if not all, of the following desiderata:
Speed. For example, the recent paper [4] reports list-decoding cost "quadratic
in the blocklength n" (counting the number of operations in F_{q^m}); but this
is asymptotically much larger than the n(lg n)^{2+o(1)} that had been achieved
decades earlier for w = ⌊t/2⌋.
Effectiveness (how many errors are decoded). For example, the recent paper
[52] is limited to the big-field Johnson distance, significantly below the F_q
Johnson distance if q is small.
Simplicity. For example, [3], one of the few papers reporting essentially-
linear-time list decoding, is sufficiently general to handle arbitrary weights,