[branch]
if (uiLevel == 0)
{   // If we are at the finest level, sample the shadow
    // map with PCF at location f3CurrUVAndDepth.xy
    IsInLight = ...
}
// Execute body of the loop from Algorithm 1, using
// fStep as the scaler for the step length ds,
// update f3RlghIn, f3MieIn, f2NetDensFromCam
...
f3CurrUVAndDepth += f3UVAndDepthStep * fStep;
uiSamplePos += 1 << uiLevel;
fMarchedDist += fRayStepLengthWS * fStep;
} // while( fMarchedDist < fRayLength )
} // for(uint Cascade ...)
// Add contribution from the ray section behind the shadow map
// Apply Rayleigh and Mie phase functions
Listing 2.2. Shader code for the optimized ray marching algorithm.
To improve performance, we skip one or two of the smallest cascades (the global
variable g_StartCscd stores the index of the first cascade to process). Since
light-scattering effects are only visible at a large scale, this has negligible
visual impact. We also limit the maximum distance covered by the shadow cascades
to 300 km. We therefore need to account for the scattering from the part of the
ray behind the largest cascade, which is accomplished by trapezoidal integration
at the end of the shader.
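This closing step can be sketched as follows. The code below is only an illustration with assumed names (TrapezoidalTail() is a hypothetical helper, not the chapter's actual code); its integrand arguments stand for the Rayleigh or Mie integrand of Algorithm 1 evaluated at the end of the last cascade and at the ray end, with the light treated as fully visible behind the cascades.

// Minimal sketch (illustrative names): trapezoidal contribution of the
// unshadowed ray section behind the largest cascade.
float3 TrapezoidalTail(float3 f3IntegrandAtCascadeEnd, // integrand at end of last cascade
                       float3 f3IntegrandAtRayEnd,     // integrand at the ray end
                       float  fMarchedDist,            // distance covered by ray marching
                       float  fRayLength)              // total ray length
{
    float fRemainingDist = max(fRayLength - fMarchedDist, 0.0);
    // Trapezoidal rule: average the endpoint integrands and scale by the span.
    return 0.5 * (f3IntegrandAtCascadeEnd + f3IntegrandAtRayEnd) * fRemainingDist;
}

The Rayleigh and Mie accumulators f3RlghIn and f3MieIn would each receive such a contribution before the phase functions are applied.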
To distribute the cascades, we use mixed logarithmic/linear partitioning. Note
that the shadow cascades must cover the entire view frustum, because visibility
is queried not only on surfaces but in the whole visible volume. Optimized
cascade-distribution techniques, such as those based on determining the
minimum/maximum extents of the visible geometry, should therefore be used with
care. Note also that the first cascade used for ray marching must cover the
camera.
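As an illustration, a mixed split scheme can be expressed as in the sketch below; GetCascadeEndDist() and fLogLinMix are assumed names rather than the chapter's actual code, with a blend factor of 0 giving purely linear splits and 1 purely logarithmic splits.

// Minimal sketch of a mixed logarithmic/linear cascade split scheme
// (illustrative names; not the chapter's exact code).
float GetCascadeEndDist(uint  uiCascade,      // cascade index, 0-based
                        uint  uiNumCascades,  // total number of cascades
                        float fNearDist,      // distance where the first cascade starts
                        float fMaxShadowDist, // e.g., 300 km
                        float fLogLinMix)     // 0 = linear, 1 = logarithmic
{
    float fFrac   = (float)(uiCascade + 1) / (float)uiNumCascades;
    float fLinear = lerp(fNearDist, fMaxShadowDist, fFrac);
    float fLog    = fNearDist * pow(fMaxShadowDist / fNearDist, fFrac);
    return lerp(fLinear, fLog, fLogLinMix);
}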
2.8.5 Unwarping
The final stage of the algorithm transforms the interpolated radiance stored in
the ptex2DEpipolarInscattering texture from epipolar to rectangular space. We
perform this by finding the two closest epipolar lines and projecting the sample
onto them. We then use the Gather() instruction to fetch the camera-space z
coordinates of the two closest samples on each line from the
tex2DEpipolarCamSpaceZ texture and compute bilateral weights by comparing them
with the z coordinate of the target screen pixel (loaded from tex2DCamSpaceZ).
Using these bilateral weights, we tweak the filtering locations so that the
Sample() instruction returns the weighted sum of in-scattering values on each
epipolar line.
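A simplified sketch of this filter is given below. It assumes, for brevity, that each epipolar line occupies one row of the epipolar textures and that the target pixel projects to the same coordinate on both neighboring lines; the 10% depth tolerance and all names other than tex2DCamSpaceZ and tex2DEpipolarCamSpaceZ are assumptions rather than the chapter's exact code.

// Simplified sketch of the unwarping filter (illustrative names and layout).
Texture2D<float>  tex2DCamSpaceZ            : register(t0);
Texture2D<float>  tex2DEpipolarCamSpaceZ    : register(t1);
Texture2D<float3> tex2DEpipolarInscattering : register(t2);
SamplerState      samLinearClamp            : register(s0);

float3 UnwarpInscattering(float2 f2ScreenUV,       // target pixel in screen space
                          float2 f2EpipolarUV,     // same pixel projected into epipolar space
                          float2 f2EpipolarTexDim) // epipolar texture dimensions
{
    // Camera-space z of the target screen pixel.
    float fCamZ = tex2DCamSpaceZ.SampleLevel(samLinearClamp, f2ScreenUV, 0);

    // Camera-space z of the two closest samples on each of the two closest
    // epipolar lines: four values fetched with a single Gather().
    float4 f4SrcZ = tex2DEpipolarCamSpaceZ.Gather(samLinearClamp, f2EpipolarUV);

    // Bilateral weights: reject samples whose depth differs too much from
    // the target pixel's depth (the 10% tolerance is illustrative).
    float4 f4W = abs(f4SrcZ - fCamZ) < 0.1 * fCamZ ? float4(1,1,1,1) : float4(0,0,0,0);

    // Texel centers of the 2x2 Gather footprint; Gather order is
    // w = top-left, z = top-right, x = bottom-left, y = bottom-right.
    float2 f2TexelSize = 1.0 / f2EpipolarTexDim;
    float2 f2TopLeftUV = (floor(f2EpipolarUV * f2EpipolarTexDim - 0.5) + 0.5) * f2TexelSize;

    // Fold each line's two weights into a tweaked u coordinate so that one
    // bilinear SampleLevel() per line returns the weighted sum on that line.
    float fWTop = f4W.w + f4W.z, fWBot = f4W.x + f4W.y;
    float fUTop = f2TopLeftUV.x + (fWTop > 0 ? f4W.z / fWTop : 0.5) * f2TexelSize.x;
    float fUBot = f2TopLeftUV.x + (fWBot > 0 ? f4W.y / fWBot : 0.5) * f2TexelSize.x;

    float3 f3Top = tex2DEpipolarInscattering.SampleLevel(samLinearClamp,
                       float2(fUTop, f2TopLeftUV.y), 0);
    float3 f3Bot = tex2DEpipolarInscattering.SampleLevel(samLinearClamp,
                       float2(fUBot, f2TopLeftUV.y + f2TexelSize.y), 0);

    // Blend the two lines using their total bilateral weights.
    float fTotalW = fWTop + fWBot;
    return fTotalW > 0 ? (f3Top * fWTop + f3Bot * fWBot) / fTotalW
                       : 0.5 * (f3Top + f3Bot);
}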