37.4 Overview of k-dimensional Structures
The remainder of this chapter discusses four classes of spatial data structures.
These correspond fairly closely to k-dimensional generalizations of classic
one-dimensional lists, trees, arrays, and hash tables under the mapping shown in
Table 37.1. In that table, the actual intersection time depends on the spatial
distribution of the elements and on the intersection query geometry, but the
big-O values provide a useful bound.
Table 37.1: Approximate time and space costs of various data structures
containing n elements in k dimensions. The grids have g subdivisions per
axis; constant factors are not modeled by these expressions.

    1D Data Structure    k-d Analog          Intersection Time    Total Space
    List                 List and Array      O(n)                 O(n)
    Tree                 BVH and BSP tree    O(log n)             O(n)
    Array of cells       Grid of cells       O(n/g^k)             O(g^k + n)
    Hash table           Hash grid           O(1)                 O(n)
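To make the grid row of the table concrete, a quick back-of-the-envelope calculation (the scene size and grid resolution here are illustrative, not taken from the text):

```python
# Illustrative arithmetic for the grid row of Table 37.1.
# n, g, and k are hypothetical values chosen for this sketch.
n, g, k = 1_000_000, 100, 3    # primitives, subdivisions per axis, dimensions

cells = g ** k                 # the O(g^k) term: cells allocated even if empty
expected_per_cell = n / cells  # the O(n/g^k) term: expected work per cell query
total_space = cells + n        # the O(g^k + n) term

print(cells, expected_per_cell, total_space)
```

Even at a modest 100 subdivisions per axis, a 3D grid allocates a million cells before storing any geometry, which is exactly the space problem the following paragraphs discuss.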
We can build some intuition for the runtimes of spatial data structures: operations
should take linear time on lists, log time on trees, and constant time on
regular grids. The space requirements of a regular grid are problematic because
the grid allocates memory even for areas of the scene that contain no geometry.
However, using a hash table, we can store only the nonempty cells. Thus, we can
expect list, tree, and hash grid structures to require only a linear amount of space
in the number of primitives that they encode.
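As a minimal sketch of this idea (the class and method names here are hypothetical, not from the text), a hash grid can key a dictionary on integer cell coordinates, so memory is consumed only by cells that actually contain primitives:

```python
import math
from collections import defaultdict

class HashGrid:
    """Sketch of a spatial hash grid: a dict keyed by integer cell
    coordinates, so space is linear in the primitives inserted rather
    than in the total number of grid cells."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # cell key -> list of primitives

    def _key(self, p):
        # Integer cell coordinates by flooring each component.
        return tuple(math.floor(c / self.cell_size) for c in p)

    def insert(self, p, primitive):
        self.cells[self._key(p)].append(primitive)

    def query_cell(self, p):
        # Expected O(1): a single hash lookup for the cell containing p.
        return self.cells.get(self._key(p), [])
```

Two points falling in the same cell share one dictionary entry, and cells that never receive a point occupy no memory at all, which is the source of the linear space bound in Table 37.1.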
With some knowledge of the scene's structure, one can often tune the data
structures to achieve amortized constant-factor speedups through
architecture-independent means, typically by factors of two to ten. Such
speedups are often necessary to achieve real-time performance and enable
interaction, although they may not be worthwhile for smaller data sets or for
applications without real-time constraints.
All of these observations are predicated on an informal notion of what
constitutes a "reasonable" scene and set of parameters. It is also possible to
fall off the asymptotic path when the scene's structure is poor; computations
can even become slower than linear time, and the worst-case space and time
bounds for many of these data structures are in fact unbounded. To address
this, for each data structure we describe some of the major problems that
people have encountered, so that you will be able to recognize and address the
common ones. That is, the time and space bounds given in the table serve only
to build intuition for the general case. As discussed in the previous section,
one has to assume certain distributions and large problems to prove these
bounds, and doing so is probably reductive.
Understanding your scenes' geometry distributions and managing constant factors
are important parts of computer graphics. Concealing those behind assumptions
and abstractions is useful when learning about the data structures. Rolling back
assumptions and breaking those mathematical abstractions are necessary when
applying the data structures in a real implementation.
In addition to algorithmic changes and architecture-independent constant-factor
optimizations, there are some large constant-factor speedups available, perhaps
50x over "naive" implementations. These are often achieved by minding details
such as minimizing memory traffic and avoiding unnecessary comparisons.
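One common way to avoid unnecessary comparisons is to reject with a cheap bounding-box test before running an expensive exact intersection. A minimal "slab test" sketch (function names are illustrative; for simplicity it assumes no direction component is exactly zero, so the reciprocals are finite):

```python
def ray_hits_aabb(origin, direction, lo, hi):
    """Cheap ray vs. axis-aligned bounding box reject (the slab test).
    Assumes no component of `direction` is exactly zero."""
    tmin, tmax = 0.0, float("inf")
    for o, d, a, b in zip(origin, direction, lo, hi):
        inv = 1.0 / d
        t1, t2 = (a - o) * inv, (b - o) * inv
        # Intersect this axis's slab interval with the running interval.
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax  # all slab intervals overlap -> possible hit
```

Only rays that pass this inexpensive test need proceed to the exact, far more costly per-primitive intersection, which is one concrete source of the constant-factor savings described above.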