satisfy τ(N) ≥ log N / ((1 + ε) log log N). Moreira [Mor97] improved this by showing that this holds even for ε = 0. (He also showed that for all ε > 0, there is an N_ε such that for all N ≥ N_ε, τ(N) ≤ (1 + ε) log N / (log log N).) And yet, showing such lower bounds for specific numbers seems quite hard - the classic “searching for hay in a haystack” paradox.
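To get a feel for why, here is a small Python sketch (purely illustrative; the function name and the depth limit are my own choices) that determines τ(N) for tiny N by sheer brute force, under the usual convention that an SLP starts from the constant 1 and each line applies +, −, or × to previously computed values. Already for two-digit numbers the search space is enormous, which is the "haystack" flavour of the problem.

```python
def tau_exact(N, depth_limit=6):
    """Exact tau(N) for small N, by brute force over all SLPs of length
    0, 1, 2, ...: start from {1}, and each step adjoins a+b, a-b, or a*b
    for previously computed a, b. Returns None if tau(N) > depth_limit."""
    def reachable(values, steps_left):
        if N in values:
            return True
        if steps_left == 0:
            return False
        vals = list(values)
        candidates = {a + b for a in vals for b in vals}
        candidates |= {a - b for a in vals for b in vals}
        candidates |= {a * b for a in vals for b in vals}
        return any(reachable(values | {c}, steps_left - 1)
                   for c in candidates if c not in values)

    for k in range(depth_limit + 1):
        if reachable(frozenset({1}), k):
            return k
    return None

if __name__ == "__main__":
    for N in (7, 11, 23):
        # compare with the trivial binary-expansion upper bound 2*floor(log2 N)
        print(N, tau_exact(N), 2 * (N.bit_length() - 1))
```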
Let's move over from individual numbers to sequences of numbers. Let (a_n)_{n≥1}
be some sequence of natural numbers. When can we say that the sequence is easy
to compute? Each number in the sequence should be “easy” relative to its position
in the sequence. That is, the sequence (b_n), where b_n = τ(a_n), should not grow very fast. One possible definition is that b_n should be polynomially bounded in n. For instance, for a_n = 2^{2^n}, we know that b_n is essentially n: start from 1, make 2, and square n times. Is that not moderate growth? Not really. Consider a function that maps a position n to not just the number τ(a_n) = b_n but to an SLP of size b_n computing a_n. For the sequence (2^{2^n}), this function takes an input n represented in log n bits, and outputs a circuit of size about n, that is, exponential in the size of the input. That's not moderate growth!
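For concreteness, here is what that function looks like for (2^{2^n}), as a small Python sketch (illustrative only; the function name and the SLP format are mine): the SLP it writes down has n + 1 operation lines, while the input n itself occupies only about log₂ n bits.

```python
def slp_for_2_to_2_to_n(n):
    """The natural SLP for 2^(2^n): 1, then 2 = 1 + 1, then square n times."""
    lines = ["x0 = 1", "x1 = x0 + x0"]           # x1 holds 2 = 2^(2^0)
    for i in range(1, n + 1):
        lines.append(f"x{i+1} = x{i} * x{i}")    # x_{i+1} holds 2^(2^i)
    return lines

if __name__ == "__main__":
    n = 10
    slp = slp_for_2_to_2_to_n(n)
    # sanity check: the value of the last line really is 2^(2^n)
    vals = {"x0": 1, "x1": 2}
    for i in range(1, n + 1):
        vals[f"x{i+1}"] = vals[f"x{i}"] ** 2
    assert vals[f"x{n+1}"] == 2 ** (2 ** n)
    # output size grows linearly in n, i.e. exponentially in the input length
    print("input bits:", n.bit_length(), "  SLP operation lines:", len(slp) - 1)
```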
OK, so let's say that a sequence (a_n) is easy to compute if for some polynomial p(·), for each n, τ(a_n) ≤ p(log n), and otherwise it is hard to compute. We've set up this definition so that (2^{2^n}) is hard to compute, while the sequences (2^n) and (n) are easy to compute (a quick sketch for (2^n) appears below). Makes sense? Now let's ask, what other sequences are easy? And what sequences are hard?
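To see why (2^n) satisfies the definition, note that τ(2^n) = O(log n): square 2 repeatedly to get 2^1, 2^2, 2^4, ..., and multiply together the powers picked out by the set bits of n. (The sequence (n) is easy for a similar reason, via the binary-expansion SLP used in the brute-force comparison earlier.) A small illustrative Python sketch of that SLP and its length, with names of my own choosing:

```python
def slp_for_power_of_two(n):
    """SLP for the number 2^n of length O(log n): repeated squaring yields
    2^(2^i) for each i, and the set bits of n say which of these to multiply."""
    assert n >= 1
    lines = ["x0 = 1", "x1 = x0 + x0"]            # x1 = 2
    powers = {0: "x1"}                             # powers[i] names 2^(2^i)
    top = n.bit_length() - 1
    for i in range(1, top + 1):
        lines.append(f"x{i+1} = x{i} * x{i}")      # x_{i+1} = 2^(2^i)
        powers[i] = f"x{i+1}"
    acc = None
    for i in range(top + 1):                       # multiply the needed powers
        if (n >> i) & 1:
            if acc is None:
                acc = powers[i]
            else:
                new = f"y{len(lines)}"
                lines.append(f"{new} = {acc} * {powers[i]}")
                acc = new
    return lines

if __name__ == "__main__":
    for n in (9, 1000, 10**6):
        ops = len(slp_for_power_of_two(n)) - 1     # number of operations
        print(n, ops, 2 * n.bit_length())          # ops = O(log n), i.e. poly(log n)
```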
A sequence with famously open status is (n!). The completely naive SLP that constructs the first n numbers with n − 1 increments and then multiplies them shows that τ(n!) ≤ 2n − 2. But can this be improved significantly? Or is this sequence hard? The best we know is that τ(n!) is O(√n log² n); see [BCS97].
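To make the naive upper bound concrete, here is a small Python sketch (illustrative only; the names are mine) of the completely naive SLP just described: n − 1 increments build 2, 3, ..., n, and n − 1 multiplications fold the first n numbers into n!, for 2n − 2 operations in total, which is nowhere near poly(log n).

```python
import math

def naive_factorial_slp(n):
    """Naive SLP for n!, under the convention that the constant 1 is free and
    each +, -, or * of previously computed values costs one operation.
    Returns (value, number_of_operations)."""
    assert n >= 2
    ops = 0
    numbers = [1]                  # the free starting constant
    for _ in range(n - 1):         # n - 1 increments build 2, 3, ..., n
        numbers.append(numbers[-1] + 1)
        ops += 1
    prod = numbers[0]
    for k in numbers[1:]:          # n - 1 multiplications fold them into n!
        prod = prod * k
        ops += 1
    return prod, ops

if __name__ == "__main__":
    for n in (5, 10, 20):
        value, ops = naive_factorial_slp(n)
        assert value == math.factorial(n)
        print(n, ops, 2 * n - 2)   # ops equals 2n - 2
```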
Here is the interesting connection to algebraic circuit complexity. Building on a sequence of constructions by Cheng [Che04] and Koiran [Koi05], Bürgisser [Bür09] showed that if (n!) is
hard to compute, then any algebraic circuit for the
(Perm_n) family that uses only the constants −1, 0, 1 must be of superpolynomial size. If we can't even compute the numbers n! easily, then we cannot compute the polynomials Perm_n efficiently, unless we allow the use of constants that cannot themselves be built up efficiently.
Analogous to the τ complexity of natural numbers, we can define the τ complexity of polynomial families. Let τ(f) denote the size of the smallest algebraic circuit using only the constants −1, 0, 1 (call such a circuit constant-free) and computing f. We say that the family (f_n) has polynomially bounded τ complexity if for some polynomial p(n), and for each n, τ(f_n) ≤ p(n). Bürgisser's result can now be stated as: if τ(Perm_n) is polynomially bounded, then (n!) is easy to compute.
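Since “polynomially bounded” and “easy to compute” are measured against different parameters (circuit size against n, but τ(n!) against log n), it may help to spell the implication out; the display below is only a restatement of the result above in the notation of this section.

```latex
% Buergisser's implication with both "polynomially bounded" notions made explicit:
% circuits are measured against n, the integers n! against log n.
\[
  \bigl(\exists\,\text{polynomial } p\ \forall n:\ \tau(\mathrm{Perm}_n) \le p(n)\bigr)
  \;\Longrightarrow\;
  \bigl(\exists\,\text{polynomial } q\ \forall n:\ \tau(n!) \le q(\log n)\bigr).
\]
```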
Let's examine this a bit closely. Why do we state the hypothesis as “τ(Perm_n) is polynomial”? Is this not equivalent to saying (Perm_n) is in VP, and hence VNP = VP? Actually, it may not be equivalent. It is possible that (Perm_n) has polynomial-
sized circuits but no polynomial-sized constant-free circuits. Conceivably, using other
constants in intermediate computation and then cancelling them out could help.
Recall that the proof of VNP-hardness of (Perm_n) uses constants other than −1, 0, 1; 1/2 is needed. (As another example, recall how in showing that Det_n is a projection of SymDet_n, we needed the constant 1/2, even though all coefficients in Det_n are
 