Eye Coordinates
Eye coordinates are what model coordinates become when they have been
transformed into the scene where they belong and then transformed again so
that they are expressed with respect to the eye's viewing coordinate system.
The eye coordinates for a vertex can be computed from the per-vertex variable
aVertex by
uModelViewMatrix * aVertex
and the eye coordinate version of the normal vector is computed from aNormal
by
uNormalMatrix * aNormal
Both of these computations were used in early glman examples in the
chapter on shader concepts.
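Putting the two computations above together, a minimal vertex shader sketch might look like the following. The uniforms uModelViewMatrix, uNormalMatrix, and uModelViewProjectionMatrix are assumed to be supplied by the application (as in the glman examples); the varying names vECposition and vNormal are chosen here for illustration.

```glsl
uniform mat4 uModelViewMatrix;            // model -> eye transform
uniform mat3 uNormalMatrix;               // normal transform for eye space
uniform mat4 uModelViewProjectionMatrix;  // model -> clip transform

attribute vec4 aVertex;
attribute vec3 aNormal;

varying vec3 vECposition;   // eye-coordinate position
varying vec3 vNormal;       // eye-coordinate normal

void main( )
{
    // transform the vertex and its normal into eye coordinates
    vECposition = ( uModelViewMatrix * aVertex ).xyz;
    vNormal     = normalize( uNormalMatrix * aNormal );

    gl_Position = uModelViewProjectionMatrix * aVertex;
}
```

The eye-coordinate position and normal are passed on as varyings so that a fragment shader can use them, for example, in per-fragment lighting or in the eye-coordinate textures discussed below.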
You might use eye coordinates when you want to present information
from the viewer's point of view, and in that case you might develop a
procedural texture based on eye-coordinate information. Textures based on eye
coordinates can include eye-linear one-dimensional textures, discussed in the
ChromaDepth example later in this chapter. This idea will also be used to create
a 3D “data probe” in the visualization chapter.
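As a simple illustration of an eye-linear procedural texture, the fragment shader sketch below maps eye-space depth into a one-dimensional coordinate and uses it to select a color. The varying vECposition and the uniforms uNear and uFar are assumptions for this sketch, not part of any fixed interface.

```glsl
uniform float uNear;        // assumed: near end of the depth range of interest
uniform float uFar;         // assumed: far end of the depth range of interest

varying vec3  vECposition;  // assumed: eye-coordinate position from the vertex shader

void main( )
{
    // Eye-space z is negative in front of the viewer, so negate it
    // and map the [uNear, uFar] range into [0., 1.] as a 1D coordinate.
    float t = ( -vECposition.z - uNear ) / ( uFar - uNear );
    t = clamp( t, 0., 1. );

    // Use the coordinate to produce a simple depth-coded color ramp;
    // a real eye-linear texture could instead index a 1D texture here.
    gl_FragColor = vec4( t, 0., 1. - t, 1. );
}
```

The same one-dimensional coordinate could index a stored 1D texture rather than an analytic color ramp; the ChromaDepth example uses this kind of eye-linear mapping.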
Fragment Shader Processing
Outputs from Fragment Shaders
The primary output from the fragment shader is the same as that from the
fragment processor in the fixed-function pipeline: pixel color, ready to be
processed by the remaining pixel operations and then written into the
framebuffer. The fragment shader can also produce a depth value for each pixel,
which can be useful if you want to compute a depth that is different from the
usual interpolation of the vertex depths.
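The two outputs can be sketched as follows. Writing gl_FragDepth replaces the interpolated vertex depth; the uniform uDepthBias and the varying vColor are hypothetical names used only for this illustration.

```glsl
uniform float uDepthBias;   // assumed: an application-supplied depth offset
varying vec4  vColor;       // assumed: interpolated color from the vertex shader

void main( )
{
    // the primary output: the fragment's color
    gl_FragColor = vColor;

    // an optional output: override the interpolated depth.
    // gl_FragCoord.z holds the depth the fixed pipeline would have used.
    gl_FragDepth = gl_FragCoord.z + uDepthBias;
}
```

Note that once a shader writes gl_FragDepth on any path, it should write it on every path, since the interpolated depth is no longer supplied automatically.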
Replacing Fixed-Function Processing with Fragment Shaders
Before we start thinking of developing sophisticated kinds of fragment
shading, we should stop to ask how we would implement the fixed-function
kinds of shading we get from ordinary OpenGL. Sometimes this is easy, but some-
 