Matrix concatenation. When taken together, these three types of transformations can
be concatenated into a single matrix. These transforms are applied in the order shown in
Equation (8.4) when multiplying the vertex positions as a row vector on the left side of
the matrices. The rotation transformation is assumed to be the concatenation of the three
individual rotations, which together determine the orientation of the model. The order in which these transformations are performed is scaling, then rotation, then translation:
W = S * R * T.    (8.4)
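As a concrete illustration, the following sketch builds such a world matrix with the legacy D3DX math helpers (the D3DX library is mentioned later in this section); the scale factors, rotation angles, and translation used here are placeholder values, not taken from the text.

#include <d3dx9math.h>   // legacy D3DX math types and helpers (assumed available)

D3DXMATRIX BuildWorldMatrix()
{
    D3DXMATRIX S, R, T;

    // Individual transforms; the values here are arbitrary placeholders.
    D3DXMatrixScaling( &S, 2.0f, 2.0f, 2.0f );                 // scaling
    D3DXMatrixRotationYawPitchRoll( &R, 0.5f, 0.25f, 0.0f );   // concatenated rotations
    D3DXMatrixTranslation( &T, 10.0f, 0.0f, -3.0f );           // translation

    // W = S * R * T, as in Equation (8.4). With the row-vector convention used
    // by D3DX, a position v is transformed as v' = v * W, so the scaling is
    // applied first, then the rotation, then the translation.
    return S * R * T;
}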
By concatenating all of the matrices into a single one, the complete transform can be applied to the model with a single matrix multiplication instead of several individual ones. Since this combined transform converts the model from its native object space to the scene's world space, it is commonly referred to as the world matrix. Further consideration must be given to transforming the normal vectors of the model. Since the normal vectors are directions and not positions, transforming them with a matrix that contains a scaling component does not necessarily produce correct results. For a transformation matrix containing a uniform scaling, normalizing the result of the transformation is sufficient to produce the correct normal vector. However, if the transformation contains a non-uniform scaling, the normal vector must be transformed with the transpose of the inverse of the transformation used for the vertex positions. This is shown in Equation (8.5):
N = (W^-1)^T.    (8.5)
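As a sketch of Equation (8.5), again assuming the legacy D3DX math helpers, a normal could be transformed as follows; the function name and parameters are illustrative only.

#include <d3dx9math.h>

D3DXVECTOR3 TransformModelNormal( const D3DXVECTOR3& normal, const D3DXMATRIX& world )
{
    // N = (W^-1)^T, which remains valid even when W contains a non-uniform scaling.
    D3DXMATRIX inverse, normalMatrix;
    D3DXMatrixInverse( &inverse, NULL, &world );
    D3DXMatrixTranspose( &normalMatrix, &inverse );

    // Transform the normal as a direction (the translation is ignored) and
    // re-normalize to account for any scaling that remains.
    D3DXVECTOR3 result;
    D3DXVec3TransformNormal( &result, &normal, &normalMatrix );
    D3DXVec3Normalize( &result, &result );
    return result;
}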
View Space
Once the model's vertices have been positioned within the scene's world space, we need to reposition them again according to where they are being viewed from. This is typically performed by applying a view space transformation matrix, which represents a translation and rotation of the world space coordinates to place them into a frame of reference relative to the virtual camera from which the scene is being rendered. To construct a typical view matrix, we can use the formula shown in Equation (8.6):
V = T * Rz * Ry * Rx.    (8.6)
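A sketch of this construction, using the same D3DX helpers and placeholder camera parameters, is shown below; as the next paragraph explains, the translation uses the negated camera position and each rotation uses the negated camera angle.

#include <d3dx9math.h>

D3DXMATRIX BuildViewMatrix( const D3DXVECTOR3& cameraPosition,
                            float pitch, float yaw, float roll )
{
    D3DXMATRIX T, Rz, Ry, Rx;

    // Negated camera position and angles (see the explanation that follows).
    D3DXMatrixTranslation( &T, -cameraPosition.x, -cameraPosition.y, -cameraPosition.z );
    D3DXMatrixRotationZ( &Rz, -roll );
    D3DXMatrixRotationY( &Ry, -yaw );
    D3DXMatrixRotationX( &Rx, -pitch );

    // V = T * Rz * Ry * Rx, as in Equation (8.6), applied to row vectors on the left.
    return T * Rz * Ry * Rx;
}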
In this case, the translation is actually the negated position of the camera. This is because we are moving the world with respect to the camera instead of moving an object with respect to the world, as we did in the world space section. Likewise for the rotations, each of them represents the opposite of the rotation amount that would be applied to the camera if it were being rendered. Direct3D 11 provides the D3DXMatrixLookAtLH() C++ function in the D3DX library for constructing a view matrix based on more intuitive parameters, such as the location of the camera and the point it is looking at; a minimal usage sketch follows below. Figure 8.4 shows where a
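A minimal usage sketch of that helper is given here; the camera position, look-at point, and up vector are placeholder values.

#include <d3dx9math.h>

void SetupViewMatrix( D3DXMATRIX& view )
{
    D3DXVECTOR3 eye( 0.0f, 2.0f, -5.0f );   // camera position in world space
    D3DXVECTOR3 at(  0.0f, 0.0f,  0.0f );   // point the camera is looking at
    D3DXVECTOR3 up(  0.0f, 1.0f,  0.0f );   // world-space up direction

    // Builds the same translation-plus-rotation view matrix described above.
    D3DXMatrixLookAtLH( &view, &eye, &at, &up );
}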