This is done by swapping the near and far planes in the projection matrix and
changing the depth test to greater-than-or-equal instead of less-than-or-equal.
We then clear the depth buffer to 0.0 (black) instead of 1.0 (white) each
frame, because 0.0 is now where the far plane lies. This maps the near plane
to 1.0 in the depth buffer and the far plane to 0.0. There are two
nonlinearities at play: one from the post-perspective depth distribution and
one from the floating-point representation. Since we reversed one of them,
they largely cancel each other out, giving us a much better distribution of
the depth values.
Keep in mind that reversing the depth buffer affects our Hi-Z construction
algorithm as well: larger depth values are now closer to the camera, so the
min/max comparisons used to build the hierarchy have to be flipped accordingly.
One should always use a 32-bit floating-point depth buffer; on AMD hardware
the memory footprint of 24-bit and 32-bit depth buffers is the same, and it is
this hardware that the fourth-generation consoles are also equipped with.
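As a concrete illustration, a minimal Direct3D 11 setup for a reversed 32-bit
floating-point depth buffer might look like the sketch below. The API calls
and enum values are standard D3D11/DirectXMath; the helper name and the
assumption that device, context, and dsv come from the surrounding renderer
are hypothetical, and the exact engine integration will differ.

#include <d3d11.h>
#include <DirectXMath.h>

// Hypothetical helper: configures reversed-Z rendering state. The depth
// texture itself should be created with DXGI_FORMAT_D32_FLOAT rather
// than DXGI_FORMAT_D24_UNORM_S8_UINT.
void SetupReversedZ(ID3D11Device* device,
                    ID3D11DeviceContext* context,
                    ID3D11DepthStencilView* dsv,
                    DirectX::XMMATRIX& projOut)
{
    // Swap near and far in the projection: the far plane is passed where
    // the near plane normally goes, so 1.0 maps to the near plane and
    // 0.0 to the far plane.
    projOut = DirectX::XMMatrixPerspectiveFovLH(
        DirectX::XM_PIDIV4, // vertical field of view
        16.0f / 9.0f,       // aspect ratio
        1000.0f,            // swapped: far plane passed first
        0.1f);              // swapped: near plane passed last

    // The depth test flips from less-equal to greater-equal.
    D3D11_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable    = TRUE;
    dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    dsDesc.DepthFunc      = D3D11_COMPARISON_GREATER_EQUAL;
    ID3D11DepthStencilState* dsState = nullptr;
    device->CreateDepthStencilState(&dsDesc, &dsState);
    context->OMSetDepthStencilState(dsState, 0); // release dsState when done

    // Clear to 0.0 (the far plane under reversed-Z) instead of 1.0.
    context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 0.0f, 0);
}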
Another technique that can be used to improve depth precision is to actually
create the Hi-Z buffer over a view-space Z depth buffer. We would need to
output this in the geometry pass into a separate render target because recovering
it from a post-perspective depth is not going to help the precision. This gives
us uniformly distributed depth values. The only issue with a view-space Z
depth buffer is that, since it's not post-perspective, we can't interpolate it
directly in screen space. To interpolate it we have to employ the same
technique the hardware interpolator uses: we interpolate 1/Z, which is linear
in screen space, and then take the reciprocal of the interpolated value to
recover the final perspective-correct view-space Z. However, outputting a
dedicated linear view-space Z buffer might be too costly, so we should test a
reversed 32-bit floating-point depth buffer first.
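To make the perspective-correct interpolation concrete, here is a small sketch
in plain C++. The function name and parameters are hypothetical, assuming z0
and z1 are view-space depths at two screen-space endpoints and t is the
screen-space interpolation factor.

#include <cassert>

// 1/Z is linear in screen space, so we interpolate the reciprocals and
// then invert the result to get perspective-correct view-space Z.
float InterpolateViewSpaceZ(float z0, float z1, float t)
{
    assert(z0 > 0.0f && z1 > 0.0f);        // both endpoints in front of the camera
    float invZ = (1.0f - t) / z0 + t / z1; // screen-space-linear lerp of 1/Z
    return 1.0f / invZ;                    // reciprocal recovers view-space Z
}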
The cone-tracing calculations are also a bit different with a view-space Z buffer.
We would need to project the sphere back into screen space to find the size it
covers at a particular distance. There are compromises with each technique.
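As an example of that projection, the screen-space footprint of a sphere can
be approximated with the standard perspective scale factor. This is a hedged
sketch rather than the chapter's exact cone math, and all names here are
illustrative.

#include <cmath>

// Approximate radius, in pixels, that a sphere of view-space radius r
// covers on screen at view-space distance z, given the vertical field of
// view fovY (in radians) and the framebuffer height in pixels.
float ProjectedSphereRadius(float r, float z, float fovY, float screenHeight)
{
    float projScale = screenHeight / (2.0f * std::tan(fovY * 0.5f));
    return (r / z) * projScale;
}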
4.6.4 Approximate Multiple Ray Bounces
Multiple bounces are an important factor when it comes to realistic reflections.
Our brain would instantly notice that something is wrong if the reflection of a
mirror didn't itself contain reflections but just a flat color. We can see the effect of
multiple reflections in Figure 4.24.
The algorithm presented in this chapter has the nice property of supporting
multiple bounces relatively easily. The idea is to reflect an already
reflected image. In this case the already reflected image would be the previous
frame. If we compute the reflection of an already reflected image, we'll accumulate
multiple bounces over time. (See Figure 4.25.) But since we always delay the
source image by a frame, we'll have to do a re-projection of the pixels. To
achieve this re-projection, we'll basically transform the current frame's pixel into
the position it belonged to in the previous frame by taking the camera movement
into account [Nehab et al. 07].
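A minimal sketch of this re-projection in C++ with DirectXMath follows;
prevViewProj (the previous frame's view-projection matrix) and the
reconstruction of the pixel's world-space position from depth are assumptions
about the surrounding renderer.

#include <DirectXMath.h>
using namespace DirectX;

// Re-project a current-frame world-space position into the previous
// frame's UV space so last frame's reflected color can be fetched there.
XMFLOAT2 ReprojectToPreviousFrameUV(FXMVECTOR worldPos, CXMMATRIX prevViewProj)
{
    // Transform into the previous frame's clip space and perform the
    // perspective divide in one call.
    XMVECTOR prevNdc = XMVector3TransformCoord(worldPos, prevViewProj);

    // Map NDC [-1, 1] to UV [0, 1]; the Y axis flips under D3D conventions.
    XMFLOAT3 ndc;
    XMStoreFloat3(&ndc, prevNdc);
    return XMFLOAT2(ndc.x * 0.5f + 0.5f, -ndc.y * 0.5f + 0.5f);
}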