We can state this as a question about circuits. Each way of building up $N$ is an arithmetic circuit, or a straight-line program (SLP), that uses no constants other than 1 and 2. Let us denote by $\tau(N)$ the size of the smallest such circuit computing $N$. (This is the $\tau$-complexity of $N$.) By definition, $\tau(1) = \tau(2) = 0$, and for all $N > 2$, $\tau(N) > 0$. Algorithms for computing $N$ give upper bounds on $\tau(N)$. For instance, to compute $N = 2^k$, here's an SLP: $g_0 = 2$, $g_{i+1} = 2 \times g_i$ for $0 \le i \le k - 2$. Clearly, $g_i$ computes $2^{i+1}$, so $\tau(2^k) \le k - 1$. But I'm sure you can already see better ways of doing this.
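To make this concrete, here is the skew SLP written out in Python (a sketch; the function name and the operation counter are mine, not from the text):

```python
def skew_slp_power(k):
    """Compute 2**k via the skew SLP g_0 = 2, g_{i+1} = 2 * g_i.

    Uses k - 1 multiplications, matching tau(2**k) <= k - 1.
    """
    g = 2                  # g_0 = 2 (a free constant)
    ops = 0
    for _ in range(k - 1): # i = 0, 1, ..., k - 2
        g = 2 * g          # skew gate: one input is the constant 2
        ops += 1
    return g, ops

assert skew_slp_power(10) == (1024, 9)
```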
From the circuit viewpoint, an explanation of why this is not the best is that the circuit corresponding to this SLP is skew. Surely, we should be able to use non-skew gates and compute large numbers faster. Here's another SLP that computes big numbers fast: $f_0 = 2$, $f_{i+1} = f_i \times f_i$ for $0 \le i \le j - 1$. Clearly, $f_i$ computes $2^{2^i}$, so $\tau(2^{2^j}) \le j$, a much better bound than the earlier $2^j - 1$, at least for numbers of this form. Note that the way we used non-skewness, we produced a circuit with exponential formal degree (the degree at $f_i$ is $2^i$), but we're not worried about that for now.
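A sketch of the squaring SLP, in the same style:

```python
def squaring_slp(j):
    """Compute 2**(2**j) via f_0 = 2, f_{i+1} = f_i * f_i.

    Uses only j multiplications, matching tau(2**(2**j)) <= j.
    """
    f = 2              # f_0 = 2
    for _ in range(j): # i = 0, 1, ..., j - 1
        f = f * f      # non-skew gate: both inputs are the previous value
    return f

assert squaring_slp(3) == 2 ** (2 ** 3)  # 256 in just 3 multiplications
```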
Now, using these compact circuits for $2^{2^i}$, we can build a better circuit for $2^k$ by just using the binary expansion of $k$: $k = \sum_{i=0}^{t} b_i 2^i$, where $t = \lfloor \log k \rfloor$ and $b_t = 1$. So
$$2^k = 2^{\sum_{i=0}^{t} b_i 2^i} = \prod_{i=0}^{t} 2^{b_i \times 2^i} = \prod_{i \,:\, b_i = 1} 2^{2^i}.$$
Compute all the double powers using $t$ operations, and then multiply the required ones using at most $t$ operations. Overall, $\tau(2^k) \le 2t = 2\lfloor \log k \rfloor$.
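Here is a sketch of this construction (my function name; it counts the multiplications so the $2t$ bound can be checked):

```python
def power_via_double_powers(k):
    """Compute 2**k by multiplying the double powers 2**(2**i) with b_i = 1.

    The double powers cost one squaring each (at most t = floor(log2 k) in
    total), and combining the needed ones costs at most t more products.
    """
    ops = 0
    result = None
    f = 2                      # f = 2**(2**0)
    while k > 0:
        if k & 1:              # bit b_i is 1: this double power is needed
            if result is None:
                result = f
            else:
                result = result * f
                ops += 1
        k >>= 1
        if k > 0:
            f = f * f          # next double power
            ops += 1
    return result, ops

val, ops = power_via_double_powers(13)   # 13 = 1101 in binary, t = 3
assert val == 2 ** 13 and ops <= 6       # ops = 5 here, within 2t = 6
```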
We can use the same binary expansion idea to compute any $N$, not just a power of 2. Compute all powers of 2 up to $2^{\lfloor \log N \rfloor}$, and add the required ones. This shows that for all $N$, $\tau(N) \le 2\lfloor \log N \rfloor - 1$.
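The same idea for arbitrary $N$, as a sketch:

```python
def sum_of_powers(n):
    """Compute n by adding the powers of 2 in its binary expansion.

    Here ops stays within 2 * floor(log2 n); the text's slightly sharper
    2 * floor(log2 n) - 1 uses the fact that 2 is a free constant, which
    saves the first doubling below.
    """
    ops = 0
    power = 1                  # current power of 2 (1 is a free constant)
    total = None
    while n > 0:
        if n & 1:
            if total is None:
                total = power
            else:
                total = total + power
                ops += 1       # one addition gate
        n >>= 1
        if n > 0:
            power = 2 * power  # one doubling gate
            ops += 1
    return total, ops

assert sum_of_powers(13) == (13, 5)  # 13 = 8 + 4 + 1
```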
So far we have not used any subtractions. But they can be very useful too. For instance, $\tau(2^{2^j} - 1) \le j + 1$: compute $2^{2^j}$ and subtract 1. (The purely additive bound $\tau(N) \le 2\lfloor \log N \rfloor - 1$ gives only about $2^{j+1}$ for this $N$.)
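Concretely, as a one-gate extension of the squaring sketch above:

```python
def all_ones_number(j):
    """Compute 2**(2**j) - 1 with j multiplications and one subtraction."""
    f = 2
    for _ in range(j):
        f = f * f      # build 2**(2**j) by repeated squaring
    return f - 1       # one subtraction gate finishes the job

assert all_ones_number(3) == 255  # j + 1 = 4 operations for an 8-bit all-ones number
```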
What about a lower bound? We can actually formalise the intuition that the exponential-degree circuits we saw above for $2^{2^j}$ produce the largest possible number in that size. Hence, for any $N$, $\tau(N) \ge \log \log N$. In particular, $\tau(2^{2^j}) = j$. That sounds impressive: we know the exact value of $\tau$ for $2^{2^j}$. But essentially just for that; for all other numbers, we still seem to have a pretty large gap.
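Before trying to close the gap, the lower-bound claim above can be pinned down by an easy induction (my write-up, not verbatim from the text):

```latex
% Claim: an SLP of size s over the constants {1, 2} computes a number
% at most 2^{2^s}. Induction on s: with no gates the largest available
% value is 2 = 2^{2^0}; and if every value after s - 1 gates is at most
% 2^{2^{s-1}}, the worst a new gate can do is square the maximum:
\[
    2^{2^{s-1}} \times 2^{2^{s-1}} \;=\; 2^{2^{s}},
\]
% while + and - yield smaller values. Taking s = \tau(N) gives
\[
    N \;\le\; 2^{2^{\tau(N)}}
    \qquad\Longrightarrow\qquad
    \tau(N) \;\ge\; \log \log N .
\]
```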
If $N = 2^k$, then $\log \log N = \log k \le \tau(N) \le 2\lfloor \log k \rfloor = 2\lfloor \log \log N \rfloor$, so we know $\tau(N)$ within a factor of 2. But for general $N$, all we know is $\log \log N \le \tau(N) \le 2\lfloor \log N \rfloor - 1$. How can we reduce this gap? An obvious way to search for an efficient computation whose last operation is $+$ or $-$ is to express $N$ as $M \pm k$, compute $M$, compute $k$ $(= \pm(N - M))$, combine them, and choose $M$ so as to minimize $\tau(M) + \tau(k) + 1$. (A similar approach can be used for factors of $N$ and $\times$ as the last operation.)
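As a toy version of this search (my sketch, not an algorithm from the text: it omits subtraction as a last gate to keep the recursion well-founded, and it costs the two subprograms independently, which is exactly the issue discussed next):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tau_upper(n):
    """Upper-bound tau(n) by trying every + or x as the last operation.

    Returns min over n = M + k of tau_upper(M) + tau_upper(k) + 1, and
    likewise over factorizations n = M * k. Because the two subprograms
    are charged separately, with no reuse of intermediate values, this
    is only an upper bound on tau(n).
    """
    if n in (1, 2):
        return 0
    best = float("inf")
    for k in range(1, n // 2 + 1):       # last gate: n = (n - k) + k
        best = min(best, tau_upper(n - k) + tau_upper(k) + 1)
    d = 2
    while d * d <= n:                    # last gate: n = (n // d) * d
        if n % d == 0:
            best = min(best, tau_upper(n // d) + tau_upper(d) + 1)
        d += 1
    return best

print([tau_upper(n) for n in range(3, 11)])  # e.g. tau_upper(9) = 3, though tau(9) = 2
```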
But in computing $M$ and $\pm(N - M)$ (or $N / M$), the complexity may be subadditive, since we can reuse intermediate numbers from the program for $M$ while computing $\pm(N - M)$ (or $N / M$). (We are looking for circuits, not formulas.) It is identifying the extent of this reuse that is a challenge.
Similar to Shannon's bound for functions and circuits (most functions require exponential-sized circuits), de Melo and Svaiter [dMS96] showed that most numbers $N$ have $\tau(N)$ closer to the upper bound. They showed that for every $\epsilon > 0$, most $N$