It is important to note that the size of the tree has no upper bound. There are
two reasons for this, both of which can easily occur in practice. First, one can
add partitions independent of the number of primitives. This may be useful if one
expects additional primitives to be inserted later, and it may also arise naturally
from an implementation that chooses not to remove nodes when primitives move out of
them (e.g., for time efficiency in move and delete operations).
Second, in an implementation that splits primitives, poor choice of partition
planes can significantly amplify the number of primitives stored. As discussed in
Section 37.6.2, choosing partitions is a hard problem. For arbitrary scenes there is
no reason to expect that the partitions can be chosen to avoid substantial
amplification of primitives. Since sufficient amplification of the number of primitives can
cancel any benefit from efficient structure, one must be careful when choosing the
partition planes.
Most algorithms on trees assume that they have been built reasonably, with
fewer than 2n nodes.
A balanced BSP tree can locate the leaf containing a point in O(lg n) comparisons,
which is why it is common to think of tree operations as being log-time.
If the tree is unbalanced but has reasonable size, this may rise as high as O(n)
comparisons, as in the degenerate tree shown in Figure 37.10. For a tree with an
unreasonable size there is no upper bound on the time to locate a point, just as
there is no bound on the size of the tree.
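To make the cost of point location concrete, here is a minimal sketch that descends a hypothetical BSP node layout (one partition plane per internal node, two children); the names and representation are illustrative assumptions, not a particular implementation. The number of plane comparisons equals the depth of the leaf reached, so it is O(lg n) for a balanced tree and can approach n for a degenerate one.

```cpp
#include <memory>

// Minimal, hypothetical BSP node layout for illustration only:
// an internal node stores a partition plane (normal . x = offset);
// a leaf stores primitives (omitted here).
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct BSPNode {
    Vec3 normal{};                   // partition plane normal (internal nodes)
    float offset = 0.0f;             // plane offset along the normal
    std::unique_ptr<BSPNode> front;  // subtree on the positive side
    std::unique_ptr<BSPNode> back;   // subtree on the negative side
    bool isLeaf() const { return !front && !back; }
};

// Descend to the leaf whose convex region contains the point.
// Each level of the tree costs one plane comparison, so the total
// cost is the depth of the leaf that is reached.
const BSPNode* locate(const BSPNode* node, const Vec3& p) {
    while (node != nullptr && !node->isLeaf()) {
        node = (dot(node->normal, p) >= node->offset)
                   ? node->front.get()
                   : node->back.get();
    }
    return node;
}
```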
Our common intersection problems—first-ray intersection, conservative ball
intersection, and conservative box intersection—are all at least as hard as locating
the convex region containing a point. So all intersection queries on the BSP tree
take at least logarithmic expected time for a balanced tree, yet have no upper
bound on runtime if the tree structure is poor. The volume (i.e., ball and box)
queries are also necessarily linear in the size of the output, which can be as large
as n.
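To illustrate why the volume queries are at least linear in their output, here is a hedged sketch of a conservative ball query against a hypothetical node layout; the field names and the choice to store primitive indices in leaves are assumptions for this example, not a fixed API. Whenever the ball straddles a partition plane the query must descend both children, and every overlapping primitive must be reported, which in the worst case is all n of them.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical node layout for this sketch: internal nodes hold a
// partition plane (normal . x = offset); leaves hold indices of the
// primitives whose bounds overlap the leaf's region.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct BallNode {
    Vec3 normal{};
    float offset = 0.0f;
    std::unique_ptr<BallNode> front, back;
    std::vector<uint32_t> primitives;   // nonempty only in leaves
    bool isLeaf() const { return !front && !back; }
};

// Conservative ball query: collect the primitives of every leaf whose
// region the ball might touch.  When the ball straddles the partition
// plane, both subtrees must be visited, and the result can contain up
// to n primitives, so the work is at least linear in the output size.
void ballQuery(const BallNode* node, const Vec3& center, float radius,
               std::vector<uint32_t>& out) {
    if (node == nullptr) return;
    if (node->isLeaf()) {
        out.insert(out.end(), node->primitives.begin(), node->primitives.end());
        return;
    }
    // Signed distance from the ball center to the partition plane,
    // assuming the plane normal is unit length.
    const float signedDist = dot(node->normal, center) - node->offset;
    if (signedDist > -radius) ballQuery(node->front.get(), center, radius, out);
    if (signedDist <  radius) ballQuery(node->back.get(),  center, radius, out);
}
```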
The actual runtime observed for intersection queries generally grows as the
amount of space covered by the query geometry increases. Fortunately, it usually
grows sublinearly. The basic idea is that if a ray travels a long distance before it
strikes a primitive, it probably passed through many partitions, and if it travels a
short distance, it probably passed through few. The same intuition holds for the
volume queries: the larger the ball or box, the more partitions and primitives it is
likely to overlap.
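The first-ray intersection query follows the same pattern: a front-to-back traversal visits every region the ray crosses until a hit terminates the search, so the work scales with the number of partitions crossed. The sketch below assumes the same kind of hypothetical node layout as above; the leaf intersection test is left as a clearly labeled stub, since its details depend on the primitive type.

```cpp
#include <cmath>
#include <memory>
#include <optional>

// Hypothetical node layout for this sketch; the leaf test is a stub.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct RayNode {
    Vec3 normal{};
    float offset = 0.0f;
    std::unique_ptr<RayNode> front, back;
    bool isLeaf() const { return !front && !back; }

    // Placeholder: a real implementation would test the leaf's
    // primitives and return the nearest hit parameter in
    // [tMin, tMax], if any.
    std::optional<float> intersectLeaf(const Vec3&, const Vec3&,
                                       float, float) const {
        return std::nullopt;
    }
};

// First-hit traversal, visiting regions front to back along the ray.
// Each region the ray crosses costs one recursive call, so a ray that
// travels far before striking anything pays for many partitions.
std::optional<float> rayQuery(const RayNode* node,
                              const Vec3& origin, const Vec3& dir,
                              float tMin, float tMax) {
    if (node == nullptr) return std::nullopt;
    if (node->isLeaf()) return node->intersectLeaf(origin, dir, tMin, tMax);

    const float d0 = dot(node->normal, origin) - node->offset;  // signed distance at t = 0
    const float dd = dot(node->normal, dir);                    // rate of change along the ray
    const float dAtMin = d0 + tMin * dd;                        // side on which the segment starts
    const RayNode* nearChild = (dAtMin >= 0.0f) ? node->front.get() : node->back.get();
    const RayNode* farChild  = (dAtMin >= 0.0f) ? node->back.get()  : node->front.get();

    // Parameter at which the ray crosses the partition plane
    // (pushed past tMax when the ray is parallel to the plane).
    const float tSplit = (std::fabs(dd) > 1e-8f) ? (-d0 / dd) : tMax + 1.0f;

    if (tSplit <= tMin || tSplit >= tMax) {
        // The whole segment lies on one side of the plane.
        return rayQuery(nearChild, origin, dir, tMin, tMax);
    }
    // Visit the near region first; only continue into the far region
    // if nothing was hit before the crossing point.
    if (auto hit = rayQuery(nearChild, origin, dir, tMin, tSplit)) return hit;
    return rayQuery(farChild, origin, dir, tSplit, tMax);
}
```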
A good intuition is that the runtime might grow logarithmically with the extent
of the query geometry, since for uniformly distributed primitives we would expect
partitions to form a balanced tree. However, one would have to assume a lot about
the distribution of primitives and partitions to prove that as a bound.
The idea of the spatial distribution of primitives is very important to the theory
supporting BSP trees, despite its confounding effects on formal analysis. One
typically seeks to maintain a classic binary search tree in a balanced form to
minimize the length of the longest root-to-leaf path and thus minimize the expected
worst-case search behavior.
Balance (and big-O analysis) helps to minimize the worst-case runtime. But
under a given pattern of queries and scene distribution, you might want to find
the tree structure that minimizes the overall runtime. While your algorithms book
stresses the first one, what actually matters in practice is the second one.
For example, a balanced spatial data structure tree is rarely optimal in
practice [Nay93]. This at first seems surprising—classic binary tree data structures are
usually designed to maintain balance and thus minimize the worst case. However,