We can observe that the depth buffer values are linear in screen space, due to perspective, by taking their partial derivatives (gradients) with the ddx and ddy instructions in Microsoft HLSL and outputting them as color values. For any planar surface the result is a constant color, which tells us that the values change at a constant rate in screen space no matter how far the plane is from the camera. Anything that behaves linearly can also be interpolated, just like the hardware does, which is a very powerful fact. It's also the reason we did the Hi-Z construction on the nonlinear depth buffer. Our ray tracing will happen in screen space, and we would like to exploit the fact that the depth buffer values can be interpolated correctly in screen space because they're perspective-corrected. In effect, the perspective projection cancels out the nonlinearity of the values.
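As an illustration, a pixel shader along these lines could be used to visualize the gradients; it is only a sketch, and the resource names (depthTex, pointSampler) are placeholders rather than part of the original text:

Texture2D depthTex : register(t0);
SamplerState pointSampler : register(s0);

float4 VisualizeDepthGradient(float4 pos : SV_Position,
                              float2 uv : TEXCOORD0) : SV_Target
{
    // Post-projection depth of the current pixel.
    float z = depthTex.SampleLevel(pointSampler, uv, 0).r;

    // Screen-space partial derivatives of the depth value. For a
    // planar surface these are constant, so the output is a flat color.
    float dzdx = ddx(z);
    float dzdy = ddy(z);

    // Scale up for visibility; the raw gradients are typically tiny.
    return float4(abs(dzdx) * 100.0, abs(dzdy) * 100.0, 0.0, 1.0);
}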
If one desires to use a view-space Hi-Z buffer rather than a post-projection buffer, the Z-value has to be interpolated manually through its reciprocal, 1/Z, just as perspective-correct interpolation does. Either case is possible and affects the rest of the passes as mentioned earlier. We will assume a post-perspective Hi-Z buffer from now on. Now that we know the depth buffer values can be interpolated in screen space, we can go back to the Hi-Z ray-tracing algorithm itself and use our Hi-Z buffer.
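To make the difference concrete, here is a minimal sketch (not from the original text) of how depth would be interpolated along a screen-space ray in either case, assuming z0 and z1 are the depths at the two endpoints and t lies in [0, 1]:

// Post-projection depth is linear in screen space, so plain linear
// interpolation between the endpoint depths z0 and z1 is correct.
float InterpolatePostProjectionDepth(float z0, float z1, float t)
{
    return lerp(z0, z1, t);
}

// View-space Z is not linear in screen space; it has to be interpolated
// through its reciprocal, exactly as perspective-correct interpolation does.
float InterpolateViewSpaceZ(float z0, float z1, float t)
{
    return 1.0 / lerp(1.0 / z0, 1.0 / z1, t);
}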
We can parameterize our ray-tracing algorithm to exploit the fact that depth
buffer values can be interpolated. Let O be our starting screen coordinate, the
origin, let the vector D be our reflection direction, and finally let t be our driving
parameter between 0 and 1 that interpolates between the starting coordinate O
and ending coordinate O + D :
Ray(t) = O + D * t,
where the vector D and point O are defined as
D = V_ss / V_ss.z,
O = P_ss + D * (-P_ss.z).
D now extends all the way to the far plane. The division by V_ss.z sets the Z-coordinate to 1.0, but D still points in the same direction because division by a scalar doesn't change a vector's direction. O is then set to the point that corresponds to a depth of 0.0, which is the near plane. We can visualize this as a line going from the near plane to the far plane along the reflection direction, passing through the point we are shading, as shown in Figure 4.11.
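A minimal HLSL sketch of this setup, assuming positionSS holds P_ss (xy as texture coordinates, z as post-projection depth) and reflectionSS holds V_ss; the names are illustrative:

// Build the ray parameterization Ray(t) = O + D * t.
// positionSS   = P_ss: screen-space position of the shaded point
//                (xy = texture coordinates, z = post-projection depth).
// reflectionSS = V_ss: screen-space reflection vector.
void ComputeRayOriginAndDirection(float3 positionSS, float3 reflectionSS,
                                  out float3 O, out float3 D)
{
    // Dividing by the z-component scales D so that it reaches depth 1.0
    // (the far plane) at t = 1; dividing by a scalar keeps the direction.
    D = reflectionSS / reflectionSS.z;

    // Walk backward along D by the point's own depth so that O sits at
    // depth 0.0 (the near plane) and Ray(1) = O + D sits at the far plane.
    O = positionSS + D * -positionSS.z;
}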
We can now input any value t to take us between the starting point and ending
point for our ray-marching algorithm in screen space. The t value is going to be
a function of our Hierarchical-Z buffer.
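Note that with this parameterization O has a depth of 0 and D has a depth of 1, so the depth of Ray(t) is simply t; a depth value read from the Hi-Z buffer can therefore serve directly as a candidate t. A small sketch, with hiZTex and pointSampler as placeholder names:

// Evaluate the ray at parameter t.
float3 EvaluateRay(float3 O, float3 D, float t)
{
    return O + D * t;
}

// Because O.z = 0 and D.z = 1, the depth stored in the Hi-Z buffer is
// itself a valid t value for this parameterization.
float CandidateTFromHiZ(Texture2D hiZTex, SamplerState pointSampler,
                        float2 uv, float mipLevel)
{
    return hiZTex.SampleLevel(pointSampler, uv, mipLevel).r;
}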
But we need to compute the vector V and the point P first to acquire O and D. P is already available to us through the screen/texture coordinate and depth. To get V we need another screen-space point P′, which corresponds to a point somewhere along the reflection direction.
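Consistent with the mention of a second screen-space point P′, one way to obtain V is to project a point taken along the view-space reflection ray into screen space and subtract the screen-space position of the shaded point. The following is a sketch under that assumption; ProjectToScreen and the parameter names are hypothetical, not from the original text:

// Project a view-space position into screen space: perspective divide,
// then remap xy to texture coordinates (the y-flip depends on convention)
// and keep z as the post-projection depth. Assumes row-vector math.
float3 ProjectToScreen(float3 positionVS, float4x4 projMatrix)
{
    float4 clip = mul(float4(positionVS, 1.0), projMatrix);
    float3 ndc  = clip.xyz / clip.w;
    return float3(ndc.xy * float2(0.5, -0.5) + 0.5, ndc.z);
}

// V_ss: project a second point P' somewhere along the view-space
// reflection ray and subtract the screen-space position of the shaded point.
float3 ComputeScreenSpaceReflectionVector(float3 positionVS, float3 reflectionVS,
                                          float3 positionSS, float4x4 projMatrix)
{
    float3 pPrimeSS = ProjectToScreen(positionVS + reflectionVS, projMatrix);
    return pPrimeSS - positionSS;
}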