contrast to hidden-surface removal activities performed by the GPU—such as
back-face culling as described in Section 36.6, or occlusion culling on a per-pixel
level through the depth buffer as described in Section 36.3.
The high-level (CPU-side) and low-level (GPU-side) culling activi-
ties work together toward the common goal of reducing scene complexity and thus
GPU workload.
It is tempting to regard pre-GPU culling as unnecessary: as GPUs
become more powerful and bandwidth increases, there would seem to be less
justification for throwing CPU resources at the problem of determining the
potentially visible set.
However, the GPU-side visible surface determination has a cost that is linear in the
number of primitives. Thus, when one considers a Boeing 777 model—which has
more than 100,000 unique parts and several million fastener parts—it becomes
obvious that there continues to be a need for optimizing the sequence of com-
mands and data sent to the hardware rendering pipeline.
Below is an unordered list of modules of this category, many of which require
spatial data structures as discussed below in point 3(b):
View-Frustum Culling: As explained in Chapter 13, the camera's location
and viewing parameters determine the geometry of the view frustum, and only
geometry lying inside the frustum is visible. This culling stage seeks to identify
and eliminate portions of the scene that lie wholly outside the frustum. Implemen-
tation is typically performed via arrangement of the scene's contents in a Bound-
ing Volume Hierarchy (BVH), as described in Section 36.7; however, other data
structures (e.g., BSP trees discussed in Section 36.2.1) have been used for certain
situations (e.g., static scenes).
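The core test behind this stage can be sketched as follows. This is a minimal illustration, not an implementation from the text: it assumes the frustum is represented as six inward-facing planes (extraction of those planes from the view-projection matrix is omitted), and it tests a single bounding sphere; in a BVH, the same test applied to an interior node's bounding volume lets an entire subtree be rejected at once.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Plane in the form n·p + d = 0, with n pointing toward the frustum interior.
struct Plane { Vec3 n; float d; };

struct Sphere { Vec3 center; float radius; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns false when the sphere lies wholly outside some frustum plane,
// i.e., the geometry it bounds cannot be visible and may be culled.
bool sphereInFrustum(const Sphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        float signedDist = dot(p.n, s.center) + p.d;
        if (signedDist < -s.radius)
            return false;  // wholly outside this plane: cull
    }
    return true;  // inside or straddling every plane: keep for rendering
}
```

Note that the test is conservative: a sphere outside the frustum but not wholly outside any single plane is kept, which costs some GPU work but never culls visible geometry.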
Sector-Based Culling: In many applications, the scene's environment is
architectural, that is, located in the interior of a building, with walls segment-
ing space into “sectors” and windows/doors creating “portals” that connect adja-
cent sectors. A number of algorithms, described in Section 36.8 and sometimes
called portal culling techniques, are available to cull objects in these kinds of
environments.
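The traversal at the heart of these techniques can be sketched as a walk over the sector graph. In this illustrative sketch (not an algorithm from the text), the frustum-versus-portal intersection test is replaced by a precomputed `open` flag; a real implementation would perform that test per portal and also narrow the frustum to the portal's extent before recursing.

```cpp
#include <set>
#include <vector>

struct Portal {
    int toSector;  // sector on the far side of this portal
    bool open;     // stand-in for "portal intersects the current view frustum"
};

struct Sector {
    std::vector<Portal> portals;
};

// Depth-first traversal: a sector's contents are rendered only if the
// sector is reachable from the camera's sector through visible portals.
void collectVisible(const std::vector<Sector>& sectors, int current,
                    std::set<int>& visible) {
    if (!visible.insert(current).second)
        return;  // already visited (sector graphs may contain cycles)
    for (const Portal& p : sectors[current].portals)
        if (p.open)
            collectVisible(sectors, p.toSector, visible);
}
```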
Occlusion Culling: Consider a scene modeling midtown Manhattan, seen
from the point of view of a pedestrian at just one intersection. If the depth of
the view frustum covers many city blocks, each visible surface, especially those
close to the viewer, is occluding a very large number of objects. In these types of
environments, there can be great advantage in removing these occluded objects.
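One conservative way to remove occluded objects, sketched here as an assumed illustration rather than a method from the text, is to test an object's screen-space bounding rectangle against a coarse software depth buffer built from nearby occluders: the object is culled only when every covered cell already holds a nearer depth than the object's nearest point (smaller depth = nearer, in this sketch's convention).

```cpp
#include <vector>

struct CoarseDepth {
    int w, h;
    std::vector<float> depth;  // per-cell nearest occluder depth so far
};

// Conservative test: returns true only when the whole rectangle
// [x0,x1]x[y0,y1] is covered by occluders nearer than nearestZ.
bool rectOccluded(const CoarseDepth& buf, int x0, int y0,
                  int x1, int y1, float nearestZ) {
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (buf.depth[y * buf.w + x] >= nearestZ)
                return false;  // some cell may reveal the object: keep it
    return true;  // every cell is behind a nearer occluder: cull
}
```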
Contribution Culling/Detail Culling: A visible primitive, or an entire sub-
portion of the scene, may be too small and/or too far away to make an impact
on the rendering. This culling step is designed to detect and dismiss such content.
Some applications might choose to use this type of culling only when the viewer is
in motion, since the absence of small objects will very likely go unnoticed during
dynamics but may be detectable when the camera is at rest.
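A common form of this test, sketched here under assumed names and an illustrative pixel threshold (neither taken from the text), estimates the projected size of an object's bounding sphere and culls the object when that size falls below a few pixels.

```cpp
#include <cmath>

// Approximate projected diameter, in pixels, of a bounding sphere of the
// given radius at distance `dist` from the camera, for a viewport that is
// `screenHeight` pixels tall with vertical field of view `fovY` (radians).
float projectedPixels(float radius, float dist,
                      float fovY, float screenHeight) {
    float angular = 2.0f * std::atan(radius / dist);  // angular diameter
    return angular / fovY * screenHeight;
}

// Cull the object when its on-screen footprint is below minPixels.
bool contributionCull(float radius, float dist, float fovY,
                      float screenHeight, float minPixels) {
    return projectedPixels(radius, dist, fovY, screenHeight) < minPixels;
}
```

For example, a sphere of radius 0.1 seen from 100 units away projects to roughly two pixels on a 1080-pixel-tall viewport with a one-radian field of view, and would be culled under a four-pixel threshold.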
16.4.2.1(c) Reducing the Transmission/Rendering Cost of Geometric Shapes
In this set of activities, complex geometric shapes specified via meshes are
either encoded, to reduce the GPU-side rendering cost or the size of the data
buffers needed to transfer the specification to the GPU, or simplified, by
reducing the mesh's complexity (e.g., the number of triangles and vertices).
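One familiar size reduction of the first kind, sketched here as an assumed illustration rather than a method from the text, is converting a "triangle soup" of explicitly repeated vertices into an indexed mesh, so that shared vertices are stored once and referenced by index in the buffers sent to the GPU.

```cpp
#include <map>
#include <tuple>
#include <vector>

struct Vertex { float x, y, z; };

bool operator<(const Vertex& a, const Vertex& b) {
    return std::tie(a.x, a.y, a.z) < std::tie(b.x, b.y, b.z);
}

struct IndexedMesh {
    std::vector<Vertex> vertices;   // unique vertices only
    std::vector<unsigned> indices;  // three indices per triangle
};

// Deduplicates exactly-equal vertices; a production encoder would also
// quantize attributes and reorder triangles for the GPU's vertex cache.
IndexedMesh buildIndexedMesh(const std::vector<Vertex>& soup) {
    IndexedMesh mesh;
    std::map<Vertex, unsigned> remap;
    for (const Vertex& v : soup) {
        auto it = remap.find(v);
        if (it == remap.end()) {
            it = remap.emplace(v, (unsigned)mesh.vertices.size()).first;
            mesh.vertices.push_back(v);
        }
        mesh.indices.push_back(it->second);
    }
    return mesh;
}
```

Two triangles sharing an edge, for instance, shrink from six stored vertices to four vertices plus six small indices; the savings grow with mesh connectivity.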
Reencoding is the act of converting the mesh's specification to one that is
more quickly processed by the graphics hardware. For example, converting to
 