Bézier subpatches for a single original surface, it did not prevent cracks between
distinct trimmed surfaces. For example, the surface of the union of two solids might
lead to two trimmed surfaces and the algorithm applied to each might not produce
a common boundary since each surface may have used a different definition for that
boundary curve.
Kumar and Manocha ([KumM95] and [KumM94]) describe an algorithm that is
similar to the one in [RoHD89]. NURBS surfaces and NURBS trimming curves were
converted into sequences of Bézier representations, because the Bézier form
makes some computations simpler than the B-spline form does. One also gets
better bounds on derivatives and curvature. The curves were polygonized and a
triangulation of the trimmed surface was generated via uniform subdivisions. The
uniformity greatly simplifies the work. What one needs, therefore, is to determine
the u- and v-step sizes for the surface tessellation and the step sizes for the
trimming curves. Two possible criteria for finding these are:
The Deviation Criterion: The triangles should approximate the surface, and their
image in screen space should not deviate from the surface by more than a
user-specified bound. This involves second-derivative bounds.
The Size Criterion: The triangles should have a reasonable size, with their edges
in screen space shorter than a predefined user tolerance. This involves only
first-derivative bounds.
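As a small sketch of how the two criteria translate into step sizes (the function names and the exact bounds used are illustrative, not taken from [KumM95]): the size criterion turns a first-derivative bound directly into a step count, while the deviation criterion uses the standard chordal-deviation bound, namely that a chord over a parametric step h deviates from a curve with second derivative bounded by M2 by at most M2*h^2/8.

```python
import math

def steps_size_criterion(du_max, dv_max, edge_tol):
    """Size criterion: keep every screen-space triangle edge shorter than
    edge_tol. Needs only first-derivative bounds: du_max and dv_max bound
    |dS/du| and |dS/dv| (in screen space) over the unit parameter interval,
    so a parametric step h yields an edge of length at most h * bound."""
    nu = max(1, math.ceil(du_max / edge_tol))  # uniform subdivisions in u
    nv = max(1, math.ceil(dv_max / edge_tol))  # uniform subdivisions in v
    return nu, nv

def steps_deviation_criterion(d2_max, dev_tol):
    """Deviation criterion: keep the chordal deviation below dev_tol.
    Needs a second-derivative bound: if |S''| <= d2_max, the chord over a
    step h deviates by at most d2_max * h**2 / 8, so it suffices to take
    h <= sqrt(8 * dev_tol / d2_max)."""
    h = math.sqrt(8.0 * dev_tol / d2_max)
    return max(1, math.ceil(1.0 / h))
```

This makes concrete why the size criterion is cheaper: it needs only the first-derivative bounds, whereas the deviation criterion needs a second-derivative bound, which is expensive to obtain for rational surfaces.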
Even though the size criterion may not work well on small patches that have a
large variation in their curvature, it was used in [KumM95] because computing
second derivatives for rational surfaces is expensive. uv-Regions were not used,
and coving was done only in rectangles that were intersected by trimming curves.
The algorithm was simpler than the one in [RoHD89] and produced fewer triangles.
Cracks and singularities were avoided not only between patches but also between
surfaces.
To avoid cracks between surfaces, one considered the trimming curves in R³ and,
once the matching surfaces were found, used only one representation for both curves.
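A minimal sketch of this shared-representation idea (the function name is mine, not from [KumM95]): polygonize the common boundary curve in R³ exactly once, and hand the identical point list to both adjacent surfaces, so their triangulations meet vertex for vertex.

```python
def shared_boundary_points(curve_eval, n):
    """Polygonize a boundary curve shared by two trimmed surfaces exactly
    once. curve_eval maps t in [0, 1] to a point (x, y, z) in R^3, and n
    is the number of segments. Both adjacent surfaces reuse the returned
    points verbatim, so no crack can open along the common boundary."""
    return [curve_eval(i / n) for i in range(n + 1)]
```

Had each surface instead polygonized its own copy of the boundary curve, even tiny numerical differences between the two polygonizations would show up as cracks along the seam.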
The rendering algorithm described in [KumM94] computed the polygonization
dynamically based on the viewing parameters, used back-patch culling (an approxi-
mation to the normal was used for efficiency), and made use of spatial and temporal
coherence between frames.
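The back-patch culling step can be sketched as follows (a conservative test of my own construction, assuming approximate normals are sampled over the patch; it is not the exact test of [KumM94]):

```python
def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def back_patch_cull(approx_normals, to_eye):
    """Conservative back-patch culling: discard the patch only when every
    sampled approximate normal points away from the eye, i.e., has a
    non-positive dot product with the patch-to-eye vector. If any sample
    might face the viewer, the patch is kept, so an inexact normal can
    only cost time, never correctness."""
    return all(dot(n, to_eye) <= 0.0 for n in approx_normals)
```

The conservatism is what makes a cheap approximation to the normal acceptable: a falsely kept patch is merely tessellated and rendered unnecessarily, whereas a falsely culled one would leave a hole.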
The algorithm by Luken in [Luke96] is another that tried to avoid some of the
problems that arose in [RoHD89]. In the Rockwood et al. algorithm, a surface was
divided into patches, and the trimming regions were intersected with each patch to
get a new collection of subpatches and trimming curves for them. See Figure 14.13(a).
Because each subpatch was rendered independently, one had to do extra work so that
no cracks appeared between patches. The coving done by [RoHD89] was avoided in
[Luke96] by not dividing a surface into subpatches but defining a subdivision grid for
the entire surface. One polygonized the trimming curves once for the entire surface
and did not have to find intersections with subpatch boundaries. The uv-domain of
the surface was divided into v-intervals that produced horizontal slices over the whole
u domain. See Figure 14.13(b). Trimming polygons were clipped to these slices. A
uniform subdivision of the u-parameter then subdivided each slice into rectangles,
each of which was then handled separately, although the handling of those introduced