Some typical pieces of information that are stored at the vertex level include:
Position. This describes the location of the vertex. It can be a 3D
vector or a 2D screen-space position, or it could be a position already
transformed into clip space that is simply passed directly through the
vertex shader. If a 3D vector is used, the position must be transformed
into clip space by the current model, view, and projection transforms.
If 2D window coordinates (ranging according to the resolution of the
screen, not normalized) are used, then they must be converted back
into clip space in the vertex shader. (Some hardware allows your
shader to output coordinates that are already projected to screen
space.)
If the model is a skinned model (see Section 10.8), then the positional
data must also include the indices and weights of the bones that
influence the vertex. The animated matrices can be delivered in a
variety of ways. A standard technique is to pass them as vertex
shader constants. A newer technique that works on some hardware is
to deliver them in a separate vertex stream, which must be accessed
through special instructions since the access pattern is random rather
than streaming.
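
To make this concrete, here is a minimal C++ sketch of the matrix blending just described, in the spirit of the bone-palette-as-shader-constants approach: blend each influencing bone's transform by its weight, then take the result into clip space. The Vector3/Vector4/Matrix4x4 types, the transformPoint helper, the row-vector convention, and the four-influence limit are illustrative assumptions, not any particular engine's API.

#include <cstdint>

// Illustrative math types (row-vector convention: v' = v * M).
struct Vector3 { float x, y, z; };
struct Vector4 { float x, y, z, w; };
struct Matrix4x4 {
    float m[4][4];
    // Transform a point, treating it as (x, y, z, 1).
    Vector4 transformPoint(const Vector3& p) const {
        return { p.x*m[0][0] + p.y*m[1][0] + p.z*m[2][0] + m[3][0],
                 p.x*m[0][1] + p.y*m[1][1] + p.z*m[2][1] + m[3][1],
                 p.x*m[0][2] + p.y*m[1][2] + p.z*m[2][2] + m[3][2],
                 p.x*m[0][3] + p.y*m[1][3] + p.z*m[2][3] + m[3][3] };
    }
};

// A skinned vertex: position plus the indices and weights of the
// bones that influence it. Four influences per vertex is a common limit.
struct SkinnedVertex {
    Vector3 position;      // model-space position
    uint8_t boneIndex[4];  // indices into the bone palette
    float   boneWeight[4]; // weights, assumed to sum to 1
};

// Blend the animated bone transforms by weight, then apply the
// model-view-projection transform to reach clip space. On real
// hardware this runs in the vertex shader; bonePalette stands in
// for matrices delivered as vertex shader constants.
Vector4 skinToClipSpace(const SkinnedVertex& v,
                        const Matrix4x4 bonePalette[],
                        const Matrix4x4& modelViewProj)
{
    Vector3 blended = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i) {
        Vector4 p = bonePalette[v.boneIndex[i]].transformPoint(v.position);
        blended.x += v.boneWeight[i] * p.x; // bone matrices are affine,
        blended.y += v.boneWeight[i] * p.y; // so p.w stays 1 and can be
        blended.z += v.boneWeight[i] * p.z; // ignored here
    }
    return modelViewProj.transformPoint(blended);
}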
Texture-mapping coordinates. If we are using texture-mapped triangles,
then each vertex must be assigned a set of mapping coordinates.
In the simplest case, this is a 2D location in the texture map. We
usually denote the coordinates (u,v). If we are using multitexturing,
then we might need one set of mapping coordinates per texture map.
Optionally, we can generate one or more sets of texture-mapping co-
ordinates procedurally (for example, if we are projecting a gobo onto
a surface).
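
One common recipe for the procedural case, shown as a hedged sketch reusing the math types above: run the position through the projector's view and projection transforms, then remap the result from clip space into [0, 1] texture space. The function and parameter names here are hypothetical.

// Generate (u,v) for a projected texture such as a gobo: transform
// the position by the projector's view-projection matrix, perform
// the perspective divide, and remap from [-1, 1] into [0, 1].
bool projectedUV(const Vector3& modelPos,
                 const Matrix4x4& projectorViewProj,
                 float& u, float& v)
{
    Vector4 clip = projectorViewProj.transformPoint(modelPos);
    if (clip.w <= 0.0f)
        return false;                  // point is behind the projector
    u = 0.5f * (clip.x / clip.w) + 0.5f;
    v = 0.5f * (clip.y / clip.w) + 0.5f;
    return true;
}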
Surface normal. Most lighting calculations need the surface normal.
Even though these lighting equations are often done per-pixel, with
the surface normal being determined from a normal map, we still
often store a normal at the vertex level, in order to establish the basis
for tangent space.
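
For reference, the diffuse term at the heart of most of these lighting equations is a clamped dot product with the normal; the per-pixel variants evaluate the same expression with a normal fetched from the normal map instead of the interpolated vertex normal. A minimal sketch, reusing the Vector3 type above:

#include <algorithm>

// Lambertian diffuse intensity. Both vectors are assumed to be unit
// length, with lightDir pointing from the surface toward the light.
float diffuseIntensity(const Vector3& n, const Vector3& lightDir)
{
    float nDotL = n.x*lightDir.x + n.y*lightDir.y + n.z*lightDir.z;
    return std::max(nDotL, 0.0f); // clamp away back-facing light
}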
Color. Sometimes it's useful to assign a color input to each vertex.
For example, if we are rendering particles, the color of the particle
may change over time. Or we may use one channel (such as alpha)
to control the blending between two texture layers. An artist can
edit the vertex alpha to control this blending. We might also have
per-vertex lighting calculations that were done offline.
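
The two-layer blend mentioned above is just a linear interpolation driven by the vertex alpha. A minimal sketch, with an illustrative Color type:

// Per-vertex color with alpha.
struct Color { float r, g, b, a; };

// Blend two sampled texture layers using the artist-edited vertex
// alpha: 0 shows layer0 entirely, 1 shows layer1 entirely.
Color blendLayers(const Color& layer0, const Color& layer1, float vertexAlpha)
{
    return { layer0.r + vertexAlpha * (layer1.r - layer0.r),
             layer0.g + vertexAlpha * (layer1.g - layer0.g),
             layer0.b + vertexAlpha * (layer1.b - layer0.b),
             layer0.a + vertexAlpha * (layer1.a - layer0.a) };
}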
Basis vectors. As discussed in Section 10.9, for tangent-space normal
maps (and a few other similar techniques) we need basis vectors in
order to define the tangent space at each vertex.
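
Pulling these attributes together, one possible vertex layout for a tangent-space normal-mapped mesh, along with the rotation from tangent space into model space, is sketched below. The layout, field names, and handedness convention are illustrative assumptions; in practice the bitangent is often reconstructed in the shader (with a per-vertex sign flip) rather than stored.

// One possible layout carrying the attributes discussed above;
// reuses the Vector3 and Color types from the earlier sketches.
struct MeshVertex {
    Vector3 position; // model-space position
    Vector3 normal;   // surface normal: the z axis of tangent space
    Vector3 tangent;  // x axis of tangent space, aligned with +u
    float   u, v;     // texture-mapping coordinates
    Color   color;    // per-vertex color; alpha may drive blending
};

// Cross product, used to reconstruct the third basis vector.
Vector3 cross(const Vector3& a, const Vector3& b)
{
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}

// Rotate a tangent-space vector (e.g., a decoded normal-map sample)
// into model space. Assumes the basis vectors are unit length and
// orthogonal; the cross-product order fixes the handedness, which
// must match the tool that generated the normal map.
Vector3 tangentToModel(const Vector3& t, const MeshVertex& v)
{
    Vector3 bitangent = cross(v.normal, v.tangent);
    return { t.x*v.tangent.x + t.y*bitangent.x + t.z*v.normal.x,
             t.x*v.tangent.y + t.y*bitangent.y + t.z*v.normal.y,
             t.x*v.tangent.z + t.y*bitangent.z + t.z*v.normal.z };
}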