end user, they usually are not general enough for the deformation mechanisms to apply to an arbitrary scenario or an arbitrary mesh. Parallax mapping does not work well with curved surfaces, while altering a mesh's vertex structure requires a high polygon count to look smooth.
Methods to circumvent this limitation use screen-space rendering techniques to assess which pixels should be clipped. Clipping pixels in screen space to deform meshes is not in itself a new concept for games [Vlachos 10]. The Sequenced Convex Subtraction algorithm [Stewart et al. 00] and the Goldfeather algorithm [Goldfeather et al. 86, Goldfeather et al. 89] are examples of this group of implementations. However, these traditional CSG techniques are usually implemented on the GPU [Guha et al. 03] through some form of depth peeling [Everitt 01], which has the disadvantage of requiring a number of rendering passes proportional to the scene's depth complexity.
To address these limitations, this chapter presents a solution that allows meshes to deform with a high degree of flexibility by using another mesh as the deforming component. Our solution is independent of the scene's depth complexity, since it requires a constant number of passes. In addition, unlike other CSG rendering methods [Stewart et al. 02] that assume a particular scene topology, the proposed solution is generic and works for any given scene. The algorithm even works when deforming flat meshes, a case in which common CSG usage would cut a hole in the mesh rather than deform it.
2.3 Algorithm Overview
2.3.1 Creating the Per-Pixel Linked List
Initially, the entire scene must be rendered with a shader that stores each incoming fragment in a per-pixel linked list. This section gives a quick overview of the concept; for a more comprehensive explanation, see [Thibieroz 11].
To create the linked lists we use two buffers bound as unordered access views (UAVs). One is the head pointer buffer; the other is a node storage buffer that holds the linked-list nodes. The head pointer buffer has the same dimensions as the render target, so each pixel in the render target maps, through its (x, y) coordinate, to exactly one address in the head pointer buffer. Each entry in the head pointer buffer either points to a location in the node storage buffer or holds an invalid value. If an entry points to a valid location, the node at that location is the first node of that pixel's linked list. Each valid node in the node storage buffer may in turn point to another node, thus forming a linked list of nodes.
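As a concrete illustration, the two buffers could be declared in HLSL roughly as follows. This is a minimal sketch, assuming each node stores a packed color, a depth value, and a next pointer; the struct layout, names, and register assignments are illustrative, not the chapter's actual code.

// One uint per render-target pixel, cleared to 0xFFFFFFFF (the
// invalid value) before the scene is rendered.
RWByteAddressBuffer HeadPointerBuffer : register(u1);

// Shared pool of linked-list nodes for all pixels. The UAV must be
// created with a hidden counter (D3D11_BUFFER_UAV_FLAG_COUNTER) so
// that nodes can be allocated with IncrementCounter().
struct FragmentNode
{
    uint  color; // packed RGBA8 color of the fragment
    float depth; // fragment depth, used later for sorting/clipping
    uint  next;  // index of the next node, or 0xFFFFFFFF
};
RWStructuredBuffer<FragmentNode> NodeBuffer : register(u2);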
As a new pixel is rendered, a lookup is made in the head pointer buffer at its equivalent address. If the entry points to a valid location in the node storage buffer, the new node is linked in front of the existing list: the previous head index is stored as the new node's next pointer, and the head pointer is updated to reference the new node. If instead the entry holds the invalid value, the new node simply becomes the first and only element of that pixel's list.
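In HLSL, this insertion step can be sketched as follows, building on the hypothetical declarations above. The exchange is atomic, so fragments from overlapping triangles can safely prepend to the same pixel's list; StoreFragment and its parameters are illustrative names, not the chapter's actual code.

void StoreFragment(uint2 pixel, uint packedColor, float depth, uint rtWidth)
{
    // Byte address of this pixel's entry in the head pointer buffer.
    uint headAddress = (pixel.y * rtWidth + pixel.x) * 4;

    // Allocate a fresh node from the pool via the UAV's hidden counter.
    uint newIndex = NodeBuffer.IncrementCounter();

    // Atomically make the new node the list head. The previous head,
    // which may be the invalid value 0xFFFFFFFF, is returned and
    // becomes the new node's next pointer.
    uint previousHead;
    HeadPointerBuffer.InterlockedExchange(headAddress, newIndex, previousHead);

    FragmentNode node;
    node.color = packedColor;
    node.depth = depth;
    node.next  = previousHead;
    NodeBuffer[newIndex] = node;
}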