constant buffer is also bound. When the pixel shader executes, the object's material color
is found in the same way as before, and the amount of light visible at each pixel location is
calculated, based on the normal vector passed into the pixel shader as an input attribute and
the light direction vector. The result of the lighting calculation is then used to modulate the
color of the fragment. As each additional object is rendered, the process is repeated exactly
with each of its material properties, and the resulting rendering now incorporates lighting
in addition to material colors.
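The lighting calculation described above can be sketched as a small HLSL pixel shader. This is a hypothetical illustration, not code from the text: the constant buffer layout and the names MaterialColor and LightDirection are assumptions, and the lighting term is a simple Lambertian (N·L) factor.

```hlsl
// Assumed layout: per-object material data in b0, per-frame light data in b1.
cbuffer MaterialData : register(b0)
{
    float4 MaterialColor;   // the object's material color
};

cbuffer LightData : register(b1)
{
    float3 LightDirection;  // unit vector toward the light
};

struct PS_INPUT
{
    float4 position : SV_Position;
    float3 normal   : NORMAL;   // normal vector passed in as an input attribute
};

float4 PSMain( PS_INPUT input ) : SV_Target
{
    // Amount of light visible at this pixel: N . L, clamped to [0,1].
    float nDotL = saturate( dot( normalize( input.normal ), LightDirection ) );

    // Use the lighting result to modulate the material color.
    return float4( MaterialColor.rgb * nDotL, MaterialColor.a );
}
```

Because the same function runs for every object, only the contents of the material constant buffer change between draws; the lighting data stays bound for the whole frame.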
In this case, we see that the lighting information is the same throughout the scene and
hence is applied to each of our three objects in the same way. Environmental data is the
same for all objects that share the same environment. We also see that even though each of
the three objects has a different material appearance, they all interact with the light in the
same way, using the same calculation regardless of what color they are.
Generalizing the example. So what have we learned from this example, and how can we
apply the results of these simple experiments to understand the general concept of using
the pixel shader to implement rendering techniques? The pixel shader program is simply a
function that takes a certain number of input arguments and produces a color. Some of the
inputs are changed at every pipeline execution, such as material properties, and some of
the inputs are changed once per rendered frame, such as the lighting properties. Still others
won't change at all throughout an application's lifetime, such as the vertex normal vectors
of a model.
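One common way to reflect these three update frequencies is to group the inputs into separate constant buffers, so that each buffer is updated only as often as its contents change. The names and layout below are illustrative assumptions, not prescribed by the text:

```hlsl
// Updated for every draw call (per pipeline execution).
cbuffer PerObject : register(b0)
{
    matrix WorldMatrix;
    float4 MaterialColor;
};

// Updated once per rendered frame.
cbuffer PerFrame : register(b1)
{
    float3 LightDirection;
};

// Inputs that never change over the application's lifetime, such as
// vertex normal vectors, live in vertex buffers created at load time.
```

Grouping by update frequency keeps the per-draw uploads small while leaving the slowly changing data untouched.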
The flexibility provided by the pixel shader should now be quite clear. It does not really
matter which function calculates the color of a generated fragment. As long as the function
produces a color that varies appropriately as its inputs vary, the rendering
model serves its purpose. Developing a different rendering model revolves around decid-
ing what inputs should be used to calculate the output color, developing the function that
carries out the mapping from input to output, and then producing the geometric content
that fits into the rendering model. More complex rendering models may require more in-
puts, including the possibility of using dynamically generated inputs such as the result of
additional rendering passes. However, the pixel shader itself always resolves to a way to
convert the input data to an output color.
Using Unordered Access Views
The previous example demonstrated how the pixel shader can be used to implement a ren-
dering model. In this section, we consider the additional possibilities made available by
the inclusion of unordered access views (UAVs) in the pixel shader stage. UAVs allow the
various resource types to be read or written at any location, by any invocation of the pixel
shader. Before the addition of UAVs, the pixel shader stage was restricted to reading from
resources, with the exception of writing its output color(s) and depth at the fragment
location.
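As a minimal sketch of this capability, the pixel shader below writes into a structured buffer at an index it computes itself, rather than only emitting a color at the fragment location. The buffer name, its register slot, and the assumed 1280-pixel render-target width are all hypothetical:

```hlsl
// A UAV bound to the pixel shader stage; any invocation may write
// to any element. (Slot u1 is assumed; in D3D11, pixel shader UAV
// slots are shared with the render target slots.)
RWStructuredBuffer<float4> RecordedColors : register(u1);

float4 PSMain( float4 pos : SV_Position ) : SV_Target
{
    float4 color = float4( 1.0f, 0.0f, 0.0f, 1.0f );

    // Compute an arbitrary destination index from the pixel position.
    // ASSUMPTION: a render target 1280 pixels wide.
    uint index = (uint)pos.y * 1280 + (uint)pos.x;

    // Write to the UAV at that location, independent of the normal
    // output-merger path...
    RecordedColors[index] = color;

    // ...while still returning a color through the usual route.
    return color;
}
```

Because such writes are unordered across invocations, any algorithm built on them must either tolerate arbitrary write ordering or use atomic operations to coordinate.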