Given our model of surfaces, all light that passes through a surface to reach the
camera is, by definition, indirect illumination. In other words, we can still render a
single surface at each screen-space point. We just allow some light to scatter from
behind the surface to in front of it. For a material like green glass, the scattering
may “color” the outgoing light by transmitting some frequencies more than others.
Transmission of many kinds can naturally be represented by the BSDF models
that we've already discussed. Yet those models are too computationally expensive
for current real-time rendering systems. Just as was the case for scattering and
surface models, it is common to intentionally introduce both approximations and
a more complicated model for transmission to gain both expressive control
and improved performance. The common approximation to translucency phenom-
ena is to render individual surfaces in back-to-front order and then compose them
by blending, a process in which the various colors are combined with weights.
The blending functions are arbitrary operators; we employ them to create
phenomena that resemble those arising from translucency. In general, this model
forgoes diffusion and refraction effects in order to operate in parallel at each pixel,
although it is certainly possible to include those effects via screen-space sampling
(e.g., [Wym05]) or simply using a ray-tracing algorithm. Most graphics APIs
include entry points for controlling the blending operation applied as each
surface is rendered. For example, in OpenGL these functions are glBlendFunc and
glBlendEquation. We give examples of applying these in specific contexts below.
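As a concrete illustration of back-to-front compositing, the following Python sketch applies the familiar "source-over" blend to a sorted list of surface samples. This is our own minimal model, not part of any graphics API; the correspondence to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) is noted in the comments.

```python
def blend_over(src_rgb, src_alpha, dst_rgb):
    """Source-over blend of one RGB sample onto a destination.

    Equivalent in effect to OpenGL's
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
    """
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))

def composite_back_to_front(background, surfaces):
    """Composite surfaces, sorted farthest-first, over a background color.

    Each surface is an (rgb, alpha) pair; all components lie in [0, 1].
    """
    dst = background
    for rgb, alpha in surfaces:  # farthest surface first
        dst = blend_over(rgb, alpha, dst)
    return dst

# A half-covering green pane over a white background blends to
# 0.5 * green + 0.5 * white = (0.5, 1.0, 0.5).
print(composite_back_to_front((1.0, 1.0, 1.0),
                              [((0.0, 1.0, 0.0), 0.5)]))
```

Note that the result depends on the sort order: source-over blending is not commutative, which is why the surfaces must be rendered back to front.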
There are multiple distinct causes for translucency. Distinguishing among
them is important for both artistic control and physical accuracy of rendering
(either of which may not be important in a particular application). Because all
of them reduce to some kind of blending, there is a risk of conflating them in
implementation. The human visual system is sensitive to the presence of translucency but not
always to the cause of it, which means that this sort of error can go unnoticed for
some time. However, it often leads to unsatisfying results in the long run because
one loses independent control over different phenomena. Some symptoms of such
errors are overbright pixels where objects overlap, strangely absent or miscolored
shadows, and pixels with the wrong hue.
To help make clear how blending can correctly model various phenomena,
in this section we give specific examples of applying a blending control similar
to OpenGL's glBlendFunc. The complete specification of OpenGL blending is
beyond what is required here, changes with API version, and is tailored to the
details of OpenGL and current GPU architecture. To separate the common concept
from these specifics, we define a blending function that uses only a subset
of that functionality.
If you are already familiar with OpenGL and “alpha,” then please read this
section with extra care, since it may look deceptively familiar. We seek to tease
apart distinct physical ideas that you may have previously seen combined by a
single implementation. The following text extends a synopsis originally prepared
by McGuire and Enderton [ME11].
14.10.1 Blending
Assume that a destination sample (e.g., one pixel of an accumulation-buffer
image; see Chapter 36) is to be updated by the contribution of some new source
sample. These samples may be chosen by rasterization, ray tracing, or any other
sampling method, and they correspond to a specific single location in screen space.
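The update applied to the destination sample can be sketched as a weighted sum, dst' = f_src · src + f_dst · dst, evaluated per color channel. The Python sketch below is our own illustrative subset of glBlendFunc-style control, in which each weight is chosen as a function of the source sample's alpha; the factor names mimic, but are not, the OpenGL enumerants.

```python
def blend(src, src_alpha, dst, f_src, f_dst):
    """Update a destination RGB sample with a new source sample.

    f_src and f_dst map the source alpha to scalar weights, mimicking a
    small subset of the factors selectable with glBlendFunc (e.g., GL_ONE,
    GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
    """
    ws, wd = f_src(src_alpha), f_dst(src_alpha)
    return tuple(ws * s + wd * d for s, d in zip(src, dst))

# Weight choices corresponding to common OpenGL enumerants (names are ours):
ONE                 = lambda a: 1.0
ZERO                = lambda a: 0.0
SRC_ALPHA           = lambda a: a
ONE_MINUS_SRC_ALPHA = lambda a: 1.0 - a

# Additive blending, as in glBlendFunc(GL_ONE, GL_ONE): light accumulates.
print(blend((0.2, 0.1, 0.0), 1.0, (0.1, 0.1, 0.1), ONE, ONE))

# Source-over blending, as in glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
print(blend((0.0, 1.0, 0.0), 0.5, (1.0, 1.0, 1.0), SRC_ALPHA, ONE_MINUS_SRC_ALPHA))
```

Keeping the two weight functions explicit, rather than baking in a single "alpha blend," is what lets the same mechanism express the several distinct causes of translucency discussed above without conflating them.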
 
 