// Transpose the matrices before upload (HLSL expects column-major by default)
perObject.Transpose();
context.UpdateSubresource(ref perObject, perObjectBuffer);
// Bind the per-object constants to the pixel shader's first constant buffer slot
context.PixelShader.SetConstantBuffer(0, perObjectBuffer);
// Render a full-screen quad with the debug normal pixel shader
saQuad.Shader = gBufferNormalPS;
saQuad.Render();
The result of running the above code will be similar to that shown in the top-right image
of the G-Buffer contents from the Filling the G-Buffer recipe.
There is an obvious difference between rendering a debug view of
world-space and view-space normals. If you use view-space normals and
rotate the camera, the color of the rendered normals will change, whereas
world-space normals will remain static regardless of camera rotation.
How it works…
The C# render loop simply fills the G-Buffer and then renders a debug view of the normals by
assigning the G-Buffer resources to a ScreenAlignedQuadRenderer instance along with
our debug pixel shader. To make the additional matrices available within the pixel shader,
we have bound perObjectBuffer to the first constant buffer slot of the pixel shader stage.
Retrieving the information from the G-Buffer within the pixel shader is straightforward.
The pixel shader calls the ExtractGBufferAttributes HLSL function, passing in the
textures from which to load the attributes. When working with the G-Buffer, we generally
have a one-to-one mapping between the rendered pixel and the value retrieved from the
G-Buffer. Therefore, we can use the SV_Position input semantic of the PixelIn structure
to load the appropriate information from the provided shader resources using the
Texture2D.Load function, bypassing the need for a sampler.
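As a rough sketch, this load might look as follows; the texture registers, structure layout, and entry point name here are illustrative assumptions rather than the recipe's exact listing:
Texture2D<float4> Texture0 : register(t0);    // packed G-Buffer attributes (assumed slot)
Texture2D<float> TextureDepth : register(t3); // non-linear depth (assumed slot)

struct PixelIn
{
    float4 Position : SV_Position; // pixel coordinates, for example (x + 0.5, y + 0.5)
    float2 UV : TEXCOORD0;
};

float4 PSMain(PixelIn pixel) : SV_Target
{
    // SV_Position.xy maps one-to-one onto the G-Buffer texel, so we can
    // load directly without a sampler; the third component is the mip level.
    int3 texel = int3(pixel.Position.xy, 0);
    float4 attributes = Texture0.Load(texel);
    float depth = TextureDepth.Load(texel);
    // ... unpack attributes here; visualize depth as a placeholder output
    return float4(depth.xxx, 1.0f);
}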
Unpacking the normal involves retrieving the low and high 16 bits of the normal sample
with bit shifts and the f16tof32 HLSL intrinsic function. We then decode the azimuthally
projected coordinate using the inverse of the Lambert azimuthal equal-area projection
described in the Filling the G-Buffer recipe.
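A minimal sketch of the unpacking, assuming the normal's two projected components are stored as half-precision floats packed into a single uint:
float3 UnpackNormal(uint packedNormal)
{
    // Recover the two half floats: the low 16 bits, then the high 16 bits.
    float2 encoded = float2(f16tof32(packedNormal),
                            f16tof32(packedNormal >> 16));
    // Invert the Lambert azimuthal equal-area projection to rebuild the
    // unit normal from its two projected components.
    float2 fenc = encoded * 4 - 2;
    float f = dot(fenc, fenc);
    float g = sqrt(1 - f / 4);
    return float3(fenc * g, 1 - f / 2);
}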
Reconstructing the position from depth involves a little more work. We first rebuild the
projected position: the X and Y values are derived from pixel.UV, and the existing
non-linear depth sample is used as Z. We can then transform this position with the
inverse of the projection matrix (the PerObject.InverseProjection matrix) and apply the
perspective divide to arrive at our final view-space position.
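A minimal sketch of this reconstruction follows; the helper name is illustrative, and the Y flip assumes a top-left UV origin mapped into clip space:
float3 PositionFromDepth(float2 uv, float depth, float4x4 inverseProjection)
{
    // Map UV [0,1] into clip-space XY [-1,1], flipping Y because the UV
    // origin is the top-left corner while clip space points up.
    float x = uv.x * 2 - 1;
    float y = (1 - uv.y) * 2 - 1;
    // Rebuild the projected position using the non-linear depth as Z.
    float4 projected = float4(x, y, depth, 1.0f);
    // Transform by the inverse projection, then apply the perspective
    // divide to recover the view-space position.
    float4 viewPos = mul(projected, inverseProjection);
    return viewPos.xyz / viewPos.w;
}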
 