How it works…
The GBuffer class initializes a new render target for each DXGI format that is passed to its constructor. These render target textures are created with both the BindFlags.ShaderResource and BindFlags.RenderTarget bind flags specified, allowing them to be used as RTVs for our PSFillGBuffer pixel shader and also as SRVs for retrieving the G-Buffer attributes in our subsequent deferred shaders.
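A minimal sketch of how each target might be created with these flags follows (the CreateTarget helper name and the single-sample description are assumptions for illustration, not the book's exact GBuffer code):

using SharpDX.Direct3D11;
using SharpDX.DXGI;
using Device = SharpDX.Direct3D11.Device;

// Hypothetical helper: creates one G-Buffer render target and its views.
static void CreateTarget(Device device, int width, int height,
    Format format, out Texture2D texture,
    out RenderTargetView rtv, out ShaderResourceView srv)
{
    texture = new Texture2D(device, new Texture2DDescription
    {
        Width = width,
        Height = height,
        MipLevels = 1,
        ArraySize = 1,
        Format = format,
        SampleDescription = new SampleDescription(1, 0),
        Usage = ResourceUsage.Default,
        // RTV for PSFillGBuffer, SRV for the later deferred passes.
        BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource
    });
    rtv = new RenderTargetView(device, texture);
    srv = new ShaderResourceView(device, texture);
}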
This means that we can only use DXGI formats in our textures that are compatible with both RTVs and SRVs. For example, Direct3D 11.1 compatible hardware may optionally support the SharpDX.DXGI.Format.R32G32B32_Float format for render targets, whereas it must support the SharpDX.DXGI.Format.R32G32B32A32_Float format.
To check the format support at runtime, use the Device.CheckFormatSupport function,
as shown in the following example:
FormatSupport fs = device.CheckFormatSupport(
    SharpDX.DXGI.Format.R32G32B32_Float);
if ((fs & FormatSupport.RenderTarget) == FormatSupport.RenderTarget)
{
    // ... format is supported for render targets
}
We also create a depth stencil buffer for the G-Buffer, using the typeless format SharpDX.DXGI.Format.R32G8X24_Typeless for the underlying texture so that it can be used with both a DSV and an SRV. For the SRV, we then use SharpDX.DXGI.Format.R32_Float_X8X24_Typeless, making the first 32 bits (the depth) available within our shader while the remaining 32 bits are unused. The DSV uses a format of SharpDX.DXGI.Format.D32_Float_S8X24_UInt, utilizing the first 32 bits as the depth buffer, the next 8 bits as the stencil, and leaving the remaining 24 bits unused.
InverseProjection affine transform matrices to the PerObject structure so we can
transform between view-space and world-space, and clip-space and view-space.
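For instance, the two inverse matrices can be computed from the camera's view and projection matrices when the constant buffer is updated (a sketch; perObject and perObjectBuffer are assumed names, and the transposes reflect HLSL's default column-major matrix packing):

perObject.View = Matrix.Transpose(view);
perObject.InverseView = Matrix.Transpose(Matrix.Invert(view));
perObject.Projection = Matrix.Transpose(projection);
perObject.InverseProjection = Matrix.Transpose(Matrix.Invert(projection));
context.UpdateSubresource(ref perObject, perObjectBuffer);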
When we read the G-Buffer attributes again, we will be reconstructing the position in view-space. Rather than applying a further transformation to bring the position into world space for the lighting calculations, it is more efficient to leave it in view-space. This is why we have also transformed the normal and tangent vectors into view-space. It doesn't matter in which space the calculations are performed, but generally you want to do lighting in the space that requires the fewest transformations and/or calculations.
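As an illustration, the view-space position can be reconstructed in a deferred shader from the stored depth and the InverseProjection matrix along these lines (a sketch, not the book's listing; DepthTexture, PointSampler, and uv are assumed names, and the row-vector mul convention matches the transposed matrices above):

float depth = DepthTexture.Sample(PointSampler, uv).r;
// Rebuild the clip-space position; x,y map from [0,1] UVs to [-1,1],
// with y flipped to match clip-space orientation.
float4 clipPos = float4(uv.x * 2 - 1, (1 - uv.y) * 2 - 1, depth, 1);
// Transform back into view-space and undo the perspective divide.
float4 viewPos = mul(clipPos, InverseProjection);
viewPos.xyz /= viewPos.w;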
For our PSFillGBuffer pixel shader, we have described the output structure GBufferOutput using the SV_Target output semantic on each member to control which render target is filled: SV_Target0 for the first render target, SV_Target1 for the second, and so on, up to a maximum of eight targets. The pixel shader performs standard operations, such as normal mapping and texture sampling, and then assigns the attributes to the appropriate render target member of the GBufferOutput structure.
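For reference, such an output structure might look like the following in HLSL (the member names and attribute layout are illustrative, not the book's exact GBufferOutput):

struct GBufferOutput
{
    float4 Target0 : SV_Target0; // e.g. diffuse albedo
    float4 Target1 : SV_Target1; // e.g. packed view-space normal
    float4 Target2 : SV_Target2; // e.g. specular/other attributes
    // ... further members up to SV_Target7 (eight targets in total)
};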
 