When rendering into the light accumulation buffer we offset the geometry slightly along its normals to compensate for the low resolution of the buffer and to produce more visually pleasing results at the edges of the skin-shaded geometry. Any offsets visible in the center of skinned objects will be covered when we perform the blur in the next step.
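The vertex offset described above can be sketched as follows. This is a minimal illustration, not the book's shader code; the function name and the offset magnitude are assumptions:

```python
def offset_along_normal(position, normal, offset=0.001):
    # Push the vertex slightly outward along its (unit) normal so that
    # the low-resolution light accumulation buffer fully covers the
    # silhouette of the skin-shaded mesh. The offset magnitude here is
    # a placeholder; in practice it would be tuned to the buffer size.
    return tuple(p + offset * n for p, n in zip(position, normal))
```

In a real renderer this would run in the vertex shader before projection; the sketch only shows the arithmetic.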
1.4.2 Subsurface Scattering (Screen-Space Diffusion)
As a second step, we perform a separable Gaussian blur that simulates the scattering inside the skin tissue. We use the alpha channel values to mask out the regions without skin and thereby avoid light leaking. We found that one blur pass is enough to achieve visually pleasing results; however, multiple blur passes further improve the quality of the final image.
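One dimension of the masked separable blur can be sketched as below. This is an illustrative CPU version under assumed names, not the production shader; a real implementation would run as two passes (horizontal, then vertical) over the light accumulation buffer:

```python
def masked_gaussian_blur_1d(values, alphas, weights):
    # One pass of a separable Gaussian blur over a scanline of the
    # light accumulation buffer. Texels whose alpha is zero carry no
    # skin and are skipped, which prevents light from leaking across
    # skin boundaries; the remaining weights are renormalized.
    radius = len(weights) // 2
    out = []
    for i in range(len(values)):
        total, weight_sum = 0.0, 0.0
        for k, w in enumerate(weights):
            j = i + k - radius
            if 0 <= j < len(values) and alphas[j] > 0.0:
                total += w * values[j]
                weight_sum += w
        out.append(total / weight_sum if weight_sum > 0.0 else 0.0)
    return out
```

Renormalizing by the sum of the accepted weights keeps the overall brightness stable near masked-out regions instead of darkening the edges.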
Subsequently, the blurred light accumulation buffer is sampled in the final step of the skin shading to retrieve the diffuse component of the light affecting each pixel.
In practice it is rare for skin to cover a significant portion of the screen. We take advantage of that fact by generating bounding quad geometry and blurring only inside it. This allows us to save memory bandwidth, especially when skin-shaded characters are farther away.
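The bounding quad idea can be illustrated as computing a conservative screen-space rectangle from the projected skin vertices and restricting the blur to it. The function below is a hypothetical sketch (names and conventions assumed), operating on points already in normalized device coordinates:

```python
def screen_bounding_rect(points, width, height):
    # Conservative screen-space bounding rectangle of projected skin
    # vertices given in normalized device coordinates ([-1, 1] range,
    # y up). The blur passes are then scissored to this rectangle,
    # saving bandwidth when characters cover few pixels.
    xs = [(x * 0.5 + 0.5) * width for x, y in points]
    ys = [(1.0 - (y * 0.5 + 0.5)) * height for x, y in points]
    return (int(min(xs)), int(min(ys)), int(max(xs)) + 1, int(max(ys)) + 1)
```

In a real engine the rectangle would typically come from projecting the character's bounding volume rather than every vertex.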
1.4.3 Skin Layers
Many offline and online rendering systems model skin as a three-layer material with an oily top layer, an epidermal layer, and a subdermal layer, each with different light absorption and scattering parameters. We, however, have opted for a two-layer model for performance reasons. In fact we present a “fake” three-layer model to the artists, because most seem more familiar with such a setup (see Figure 1.2). We then internally convert the parameters to suit our two-layer model. The mixing of the layers is done in the last step of the skin shading.
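One plausible way to collapse the artist-facing three-layer parameters into two runtime layers is sketched below. The conversion rule and the fixed weight are assumptions for illustration; the text does not specify the actual mapping used in production:

```python
def to_two_layer(oily, epidermal, subdermal, oily_weight=0.25):
    # Hypothetical conversion: fold the thin oily layer into the
    # epidermal color with a small fixed blend weight, leaving the
    # subdermal layer untouched. The production conversion likely
    # differs; this only shows the shape of such a mapping.
    top = tuple(oily_weight * o + (1.0 - oily_weight) * e
                for o, e in zip(oily, epidermal))
    return top, subdermal
```

The two resulting layers are then mixed in the final shading step, the top layer driving the sharp specular response and the subdermal layer the blurred diffuse term.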
1.4.4 Implementing Physically Based Shading
As discussed earlier, we chose a physically based microfacet distribution for our specular reflectance. To achieve real-time performance on mobile GPUs we precompute a single 2D lookup texture that captures the most computationally intensive parts of our BRDF calculation. We precompute the lookup texture by evaluating our custom BRDF before rendering of the scene starts.
Later, inside the pixel shader, we address the lookup texture using the cosine of the angle between the normal and the half-vector (N · H) on one dimension and the cosine of the angle between the normal and the view vector (N · V) on the other. We can store the response to varying roughness parameters in the mip chain of the lookup texture.
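The precomputation can be sketched as below. Since the custom BRDF is not reproduced in the text, a standard Beckmann microfacet distribution divided by N · V stands in for the expensive term; the table layout (one mip level per roughness) matches the description above:

```python
import math

def build_brdf_lut(size, roughness):
    # Precompute a size x size table indexed by (N.V, N.H). One table
    # per roughness value can be stored in successive mip levels of
    # the lookup texture. A Beckmann distribution divided by N.V
    # stands in here for the expensive part of the custom BRDF.
    lut = [[0.0] * size for _ in range(size)]
    m2 = roughness * roughness
    for row in range(size):
        n_dot_v = max((row + 0.5) / size, 1e-4)
        for col in range(size):
            n_dot_h = max((col + 0.5) / size, 1e-4)
            c2 = n_dot_h * n_dot_h
            # Beckmann normal distribution function D(N.H, m).
            d = math.exp((c2 - 1.0) / (c2 * m2)) / (math.pi * m2 * c2 * c2)
            lut[row][col] = d / n_dot_v
    return lut
```

At run time the shader would fetch this table with an explicit-LOD sample, selecting the mip level from the material's roughness, instead of evaluating the distribution per pixel.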