depth value with the value already in the depth buffer for this pixel. If the
new depth is farther from the camera than the value currently in the depth
buffer, then the pixel is discarded. Otherwise, the pixel color is written
to the frame buffer, and the depth buffer is updated with the new, closer
depth value.
Before we can begin rendering an image, we must clear the depth buffer
to a value that means “very far from the camera.” (In clip space, this value
is 1.0). Then, the first pixels to be rendered are guaranteed to pass the
depth buffer test. There's normally no need to double buffer the depth
buffer like we do the frame buffer.
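To make the per-pixel logic concrete, here is a minimal sketch in C++ of the clear-then-test procedure just described. The DepthBuffer layout, the one-float-per-pixel representation, and the testAndWrite name are assumptions chosen for illustration; they are not the interface of any particular graphics API.

    #include <vector>

    // Hypothetical single-channel depth buffer, one float per pixel,
    // storing clip-space depth in [0, 1].
    struct DepthBuffer {
        int width = 0, height = 0;
        std::vector<float> depth;

        // Clear to "very far from the camera" (1.0 in clip space)
        // before rendering begins.
        void clear() {
            depth.assign(static_cast<size_t>(width) * height, 1.0f);
        }

        // Returns true if the incoming depth passes the test. On success,
        // the closer depth is written back to the buffer; the caller then
        // writes the pixel color to the frame buffer.
        bool testAndWrite(int x, int y, float newDepth) {
            float &stored = depth[static_cast<size_t>(y) * width + x];
            if (newDepth > stored) {
                return false;   // farther than what is already there: discard
            }
            stored = newDepth;  // closer: keep it
            return true;
        }
    };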
10.10.2 Delivering the Geometry
After deciding which objects to render, we need to actually render them. This is a two-step process. First, we must set up the render context. This involves telling the renderer which vertex and pixel shaders to use, which textures to use, and setting any other constants needed by the shaders, such as the transform matrices, lighting positions, colors, fog settings, and so forth. The details of this process depend greatly on your high-level rendering strategy and target platform, so there isn't much more specific we can say here, although we give several examples in Section 10.11. Instead, we would like to focus on the second step, which is essentially the top box in Figure 10.37, where vertex data is delivered to the API for rendering. Nowadays a programmer has quite a bit of flexibility in what data to send, how to pack and format each data element, and how to arrange the bits in memory for maximum efficiency.
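As one concrete illustration of this flexibility, the C++ sketch below shows a plausible per-vertex layout. The RenderVertex name, the particular set of fields, and the packed 32-bit color are assumptions for the example; a real engine chooses attributes and formats to match its shaders and memory budget.

    #include <cstdint>

    // One hypothetical per-vertex layout. Each field is simply a property
    // the shaders will use; nothing here is mandated by the hardware
    // except that a clip-space position can be derived from it.
    struct RenderVertex {
        float    px, py, pz;   // model-space position
        float    nx, ny, nz;   // surface normal, used for lighting
        float    u, v;         // texture-mapping coordinates
        uint32_t color;        // packed 8-bit-per-channel ARGB vertex color
    };

    // Vertices for a batch are typically delivered as a contiguous array,
    // along with an index list describing how they form triangles, e.g.:
    //   RenderVertex vertexList[vertexCount];
    //   uint16_t     indexList[triangleCount * 3];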
What values might we need to supply per vertex? Basically, the answer is, "whatever properties you want to use to render the triangles." Ultimately, there are only two required outputs from the vertex and pixel shaders. First, the vertex shader must output a position for each vertex so that the hardware can perform rasterization. This position is typically specified in clip space, which means the hardware will do the perspective divide and conversion to screen-space coordinates (see Section 10.3.5) for you. The pixel shader has only one required output: a color value (which typically includes an alpha channel). Those two outputs are the only things that are required. Of course, to compute the correct clip-space coordinates, we probably need the matrix that transforms from model space to clip space. We can pass parameters like this, which apply to all the vertices or pixels in a given batch of triangles, by setting shader constants. This is conceptually just a large table of vector values that is part of the render context, there for us to use as needed. (Actually, there is usually one set of registers assigned for use in the vertex shader and a different set of registers that can be accessed in the pixel shader.)
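To illustrate the core of this, here is a small C++ sketch: a single model-to-clip matrix, delivered once per batch as a "shader constant," is used to transform each model-space vertex position into clip space. The Matrix4x4 and Vector4 types, the column-vector multiplication convention, and the setModelToClipConstant name are all assumptions made for the sketch, not the interface of a specific rendering API.

    // Hypothetical homogeneous vector and 4x4 matrix types.
    struct Vector4 { float x, y, z, w; };

    struct Matrix4x4 {
        float m[4][4];

        // Matrix times column vector.
        Vector4 operator*(const Vector4 &v) const {
            return {
                m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
                m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
                m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
                m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w,
            };
        }
    };

    // The model-to-clip matrix applies to every vertex in the batch, so it
    // is set once as a "shader constant" rather than supplied per vertex.
    // (In a real API this would copy the matrix into the vertex shader's
    // constant registers.)
    static Matrix4x4 g_modelToClip;
    void setModelToClipConstant(const Matrix4x4 &m) { g_modelToClip = m; }

    // The one required job of the vertex shader: produce a clip-space
    // position. The hardware then performs the perspective divide and
    // the conversion to screen space.
    Vector4 vertexShaderPosition(const Vector4 &modelSpacePos) {
        return g_modelToClip * modelSpacePos;
    }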
 