probably rank the importance of those directions, at least for lights, and choose a
subset that is likely to minimize sampling error.
Inline Exercise 15.1: We don't expect you to have perfect answers to these,
but we want you to think about them now to help develop intuition for this
problem: What kind of errors could arise from sampling a finite number of
directions? What makes them errors? What might be good sampling strategies?
How do the notions of expected value and variance from statistics apply here?
What about statistical independence and bias?
Let's start by considering all possible directions for incoming light in pseudocode and then return to the ranking of discrete directions when we later need to implement directional sampling concretely.
To consider the points and directions that affect the image, our program has to
look something like Listing 15.1.
Listing 15.1: High-level rendering structure.

for each visible point P with direction ω_o from it to pixel center (x, y):
    sum = 0
    for each incident light direction ω_i at P:
        sum += light scattered at P from ω_i to ω_o
    pixel[x, y] = sum
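The structure of Listing 15.1 can be sketched as runnable code. In this sketch, `visible_points`, `light_directions`, and `scatter` are hypothetical placeholders for machinery developed later in the chapter; only the nested-loop accumulation itself is taken from the listing.

```python
def render(width, height, visible_points, light_directions, scatter):
    """Accumulate, at each pixel, the light scattered toward the camera.

    visible_points(width, height) yields (x, y, P, w_o) tuples: a pixel,
    its visible point P, and the direction w_o from P to the pixel center.
    light_directions(P) yields incident directions w_i at P.
    scatter(P, w_i, w_o) returns the light scattered at P from w_i to w_o.
    (All three are stand-ins, not part of the book's code.)
    """
    image = [[0.0 for _ in range(width)] for _ in range(height)]
    for (x, y, P, w_o) in visible_points(width, height):
        total = 0.0
        for w_i in light_directions(P):
            total += scatter(P, w_i, w_o)
        image[y][x] = total
    return image
```

With trivial stubs (one pixel, one visible point, two incident directions each contributing 0.5), the pixel accumulates 1.0, mirroring the `sum` variable in the listing.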
15.2.2 Visible Points
Now we devise a strategy for representing points in the scene, finding those that
are visible and scattering the light incident on them to the camera.
For the scene representation, we'll work within some of the common rendering
approximations described in Chapter 14. None of these are so fundamental as to
prevent us from later replacing them with more accurate models.
Assume that we only need to model surfaces that form the boundaries of
objects. “Object” is a subjective term; a surface is technically the interface
between volumes with homogeneous physical properties. Some of these objects
are what everyday language recognizes as such, like a block of wood or the water
in a pool. Others are not what we are accustomed to considering as objects, such
as air or a vacuum.
We'll model these surfaces as triangle meshes. We ignore the surrounding
medium of air and assume that all the meshes are closed so that from the outside of an object one can never see the inside. This allows us to consider only
single-sided triangles. We choose the convention that the vertices of a triangular
face, seen from the outside of the object, are in counterclockwise order around the
face. To approximate the shading of a smooth surface using this triangle mesh,
we model the surface normal at a point on a triangle pointing in the direction of
the barycentric interpolation of prespecified normal vectors at its vertices. These
normals only affect shading, so silhouettes of objects will still appear polygonal.
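The two conventions described above can be illustrated concretely: the counterclockwise winding determines the geometric (face) normal via a cross product, and the shading normal at an interior point is the barycentric blend of the vertex normals, renormalized. This is a minimal sketch with our own vector helpers, not code from the text.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def face_normal(a, b, c):
    """Geometric normal of a triangle whose vertices a, b, c appear in
    counterclockwise order when seen from the outside of the object."""
    u = tuple(bi - ai for ai, bi in zip(a, b))
    w = tuple(ci - ai for ai, ci in zip(a, c))
    n = (u[1] * w[2] - u[2] * w[1],
         u[2] * w[0] - u[0] * w[2],
         u[0] * w[1] - u[1] * w[0])
    return normalize(n)

def shading_normal(n_a, n_b, n_c, alpha, beta, gamma):
    """Barycentric interpolation of prespecified vertex normals; the
    blend is generally shorter than unit length, so renormalize."""
    n = tuple(alpha * na + beta * nb + gamma * nc
              for na, nb, nc in zip(n_a, n_b, n_c))
    return normalize(n)
```

For a triangle in the xy-plane wound counterclockwise, `face_normal` points along +z, and interpolating three distinct vertex normals at the centroid yields their normalized average, which smoothly varies across the face even though the silhouette stays polygonal.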
Chapter 27 explores how surfaces scatter light in great detail. For simplicity,
we begin by assuming all surfaces scatter incoming light equally in all directions,
in a sense that we'll make precise presently. This kind of scattering is called Lambertian, as you saw in Chapter 6, so we're rendering a Lambertian surface. The