Grigoriev [121] gave the first space-time lower bounds that apply to all graphs for a problem (see Corollary 10.4.1), the essential idea of which is generalized in Theorem 10.4.1. Savage [291] introduced the w(u, v)-flow measure used in this version of the theorem to derive lower bounds on area-time tradeoffs for VLSI algorithms. Grigoriev [121] also established Theorem 10.4.2 and derived a tradeoff lower bound on polynomial multiplication that is equivalent to Theorem 10.5.1 on convolution. The improved version of Theorem 10.4.2, namely Theorem 10.5.4, is original with this book.
Lower bounds using the Grigoriev approach explicitly require that the sets over which functions are defined be finite. Tompa [331, 332] eliminated the requirement for finite sets but required instead that functions be linear. Using concentrator properties of matrices deduced by Valiant [343], Tompa derived a lower bound on ST for superconcentrators that he applied to matrix-vector multiplication and polynomial multiplication. He developed a similar lower bound for the DFT. (See Abelson [2] for a generalization of some of these results to continuous functions.) The lower bound of Theorem 10.5.5 uses Tompa's DFT proof but does not require that straight-line programs be linear.
The result on cyclic shift (Theorem 10.5.2) is due to Savage [292]. (This paper also generalizes Grigoriev's model to I/O-oblivious FSMs, extends JáJá's [147] space-time lower bound for matrix inversion, and derives space-time lower bounds for transitive functions and banded matrices.) The result on integer multiplication (Theorem 10.5.3) is due to Savage and Swamy [294]. In [331] Tompa also obtained Theorem 10.5.6 on merging. Transitive functions, defined in Problem 10.22, were introduced by Vuillemin [355].
In [333] Tompa examined the graph associated with the algorithm for transitive closure based on successive squarings described in Section 6.4 and demonstrated that it can be pebbled either in a polynomial number of steps or with small space, namely O(log² n), but not both. Carlson [61] demonstrated that algorithms for convolution based on FFT graphs (see Section 6.7.4) require that T = Θ(n³/S² + n²(log n)/S), which does not come close to matching the lower bound of Theorem 10.5.1. However, through the judicious replacement of back-to-back FFT subgraphs in the standard convolution algorithm, Carlson [62] was able to achieve the bounds T = Θ(n log S + n²(log S)/S), which are optimal over all FFT-based convolution algorithms and nearly as good as the T = Θ(n²/S) bounds. (See also [63].)
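To get a feel for how close Carlson's FFT-based bound comes to the general T = Θ(n²/S) bound, the following back-of-the-envelope sketch evaluates both asymptotic forms with all hidden constants taken to be 1 (an assumption made purely for illustration; the function names are ours, not from the text):

```python
# Illustrative comparison of the two asymptotic forms quoted above, with all
# hidden constants assumed to be 1. This is not a derivation; it only shows
# that the FFT-based bound exceeds the general n^2/S bound by roughly a
# factor of log S when S is much smaller than n.
from math import log2

def fft_based_time(n, S):
    # Carlson's bound for FFT-based convolution: T = Theta(n log S + n^2 (log S)/S)
    return n * log2(S) + (n * n * log2(S)) / S

def general_bound_time(n, S):
    # The T = Theta(n^2/S) bound for convolution
    return (n * n) / S

n = 1 << 20
for S in (64, 1 << 10, 1 << 16):
    ratio = fft_based_time(n, S) / general_bound_time(n, S)
    print(f"S = {S:6d}: FFT-based / general ~ {ratio:.1f} (log2 S = {log2(S):.0f})")
```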
Carlson and Savage [65] explored, for a number of problems, the size of the smallest graphs that can be pebbled with a small number of pebbles and demonstrated a tradeoff between size and space.
Pippenger [251] has surveyed many of the results described above as well as those on the black-white pebble game described below.
Several extensions of the pebble game have been developed. One of these is the red-blue pebble game discussed in Chapter 11 and its generalization, the memory hierarchy game. Another is the black-white pebble game, whose rules are the following: a) a black pebble can be placed on an input vertex at any time and on a non-input vertex only if all of its predecessors carry pebbles, whether white or black; b) a black pebble may be removed at any time; c) a white pebble can be placed on a vertex at any time; d) a white pebble can be removed only if all of its predecessors carry pebbles. The placement of a white pebble models a non-deterministic guess, and the removal of a white pebble is allowed only when that guess has been verified. Questions raised by this game include whether the minimum space required to pebble a graph is lower with the black-white pebble game than with the standard game and whether, for a given amount of space, the time required is lower. The black-white game was introduced by Cook and Sethi.
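As a concrete illustration of these rules, here is a minimal sketch of the legality checks for the four moves, assuming the DAG is given by predecessor lists; the data structures and function names are ours and not taken from the text:

```python
# Minimal sketch of move legality in the black-white pebble game.
# The graph is a DAG given as a dict mapping each vertex to its predecessors;
# input vertices have an empty predecessor list. Current placements are a
# dict mapping pebbled vertices to "black" or "white".

def can_place_black(v, preds, pebbles):
    # Rule (a): always legal on an input vertex; on a non-input vertex only
    # if every predecessor carries a pebble (black or white).
    return all(u in pebbles for u in preds[v])

def can_remove_black(v, pebbles):
    # Rule (b): a black pebble may be removed at any time.
    return pebbles.get(v) == "black"

def can_place_white(v, pebbles):
    # Rule (c): a white pebble (a non-deterministic guess) may be placed
    # on any unpebbled vertex at any time.
    return v not in pebbles

def can_remove_white(v, preds, pebbles):
    # Rule (d): a white pebble may be removed only once the guess is
    # verified, i.e. all of its predecessors carry pebbles.
    return pebbles.get(v) == "white" and all(u in pebbles for u in preds[v])

# Example: a tiny DAG with inputs a, b and one internal vertex c.
preds = {"a": [], "b": [], "c": ["a", "b"]}
pebbles = {"a": "black"}
print(can_place_black("c", preds, pebbles))   # False: predecessor b is unpebbled
pebbles["c"] = "white"                        # guess c non-deterministically
pebbles["b"] = "black"
print(can_remove_white("c", preds, pebbles))  # True: both predecessors now pebbled
```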