If we specify a camera whose image width and height are equal, and then display the resultant image on a 200 × 400 window, horizontal distances will appear stretched compared to vertical ones.
To get a nondistorted on-screen view, assuming that the screen display has
square pixels, we need the aspect ratio of the viewport and the image to be the
same. Some camera-specification systems let the user specify not the width and
height, but instead any two of width, height, and aspect ratio. It's also possible to
make the specification of a viewport accept any two of these, making it easier to
get cameras and viewports that match. (It's easier to specify width and aspect ratio
for both than to specify width and height for both, because in the latter case you'll
have to choose the second height to match the aspect ratio established by the first.)
The three parameters—width, height, and aspect ratio—are not independent;
if the user specifies all three, it should be treated as an error.
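As a concrete illustration, here is a minimal sketch (in Python, with hypothetical names such as resolve_view_size) of a specification routine that accepts exactly two of width, height, and aspect ratio, derives the third, and treats supplying all three as an error:

def resolve_view_size(width=None, height=None, aspect=None):
    """Return (width, height) given exactly two of width, height, aspect.

    Here aspect is taken to mean width / height.  Supplying all three
    parameters (or fewer than two) is treated as an error.
    """
    given = sum(p is not None for p in (width, height, aspect))
    if given != 2:
        raise ValueError("specify exactly two of width, height, aspect")
    if aspect is None:
        return width, height
    if height is None:
        return width, width / aspect
    return height * aspect, height      # width was omitted

# Specifying the same aspect ratio for camera and viewport keeps the
# on-screen view undistorted (assuming square screen pixels):
cam_w, cam_h = resolve_view_size(width=2.0, aspect=0.5)   # 2.0 x 4.0
win_w, win_h = resolve_view_size(width=200, aspect=0.5)   # 200 x 400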
Note that for a perspective camera, the ratio of the vertical and horizontal field-
of-view angles is not the aspect ratio of the view rectangle (see Exercise 13.1).
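To see why, recall that the view rectangle's width and height are proportional to the tangents of the half-angles, not to the angles themselves; the short numerical check below (a sketch, not taken from the exercise) makes the mismatch concrete:

import math

# For a symmetric perspective camera the view rectangle at distance d has
# width 2*d*tan(theta_h/2) and height 2*d*tan(theta_v/2), so its aspect
# ratio is tan(theta_h/2) / tan(theta_v/2), which differs from
# theta_h / theta_v because tan is nonlinear.
theta_h = math.radians(90.0)   # horizontal field-of-view angle
theta_v = math.radians(60.0)   # vertical field-of-view angle

aspect_ratio = math.tan(theta_h / 2) / math.tan(theta_v / 2)
angle_ratio = theta_h / theta_v
print(aspect_ratio)   # about 1.732
print(angle_ratio)    # 1.5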
13.9 Discussion and Further Reading
The camera model introduced in this chapter is very simple. It's a “camera” suited
to the “geometric optics” view of the world, in which light travels along infinites-
imally thin rays, etc. Real-world cameras are more complex, the main complexity
being that they have lenses (or more often, multiple lenses stacked up to make
a lens assembly). These lenses serve to focus light on the image plane, and to
gather more light than one can get from a pinhole camera, thus allowing them to
produce brighter images even in low-light situations. Since we're working with
virtual imagery anyhow, brightness isn't a big problem: We can simply scale up
all the values stored in an image array. Nonetheless, simulating the effects of real-
world lenses can add to the visual realism of a rendered image. For one thing,
in real-world cameras, there's often a small range of distances from the camera
where objects are in focus; outside this range, things appear blurry. This happens
with our eyes as well: When you focus on your computer screen, for instance, the
rim of your eyeglasses appears as a blur to you. Photographs made with
narrow-depth-of-field lenses therefore feel much like what we see with our
own narrow-depth-of-field eyes.
To simulate the effects of cameras with lenses in them, we must, for each pixel
we want to render, consider all the ways that light from the scene can arrive at that
pixel, that is, consider rays of light passing through each point of the surface of
the lens. Since there are infinitely many, this is impractical. On the other hand,
by sampling many rays per pixel, we can approximate lens effects surprisingly
well. And depending on the detail of the lens model (Does it include chromatic
aberration? Does it include nonsphericity?), the simulation can be very realistic.
Cook's work on distribution ray tracing [CPC84] is the place to start if you
want to learn more about this.
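As a rough illustration of the sampling idea, the sketch below follows the common thin-lens construction: each pinhole ray's origin is jittered over the lens aperture, and the new ray is aimed at the point where the original ray crosses the plane of sharp focus. The function name and conventions (camera space, looking down -z) are assumptions, not something taken from the text.

import math
import random

def thin_lens_ray(origin, direction, aperture_radius, focus_distance):
    """Perturb a pinhole-camera ray to approximate a thin lens.

    Works in camera space: the lens lies in the plane z = origin[2] and
    the camera looks down -z.  All rays converging on the same point of
    the plane of sharp focus map to the same image point, so the new ray
    is aimed at where the original ray meets z = -focus_distance.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction

    # Point on the plane of sharp focus hit by the unperturbed ray.
    t = (-focus_distance - oz) / dz
    fx, fy, fz = ox + t * dx, oy + t * dy, oz + t * dz

    # Uniform sample on the circular aperture (rejection sampling).
    while True:
        lx, ly = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if lx * lx + ly * ly <= 1.0:
            break
    new_origin = (ox + lx * aperture_radius, oy + ly * aperture_radius, oz)

    nx, ny, nz = fx - new_origin[0], fy - new_origin[1], fz - new_origin[2]
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return new_origin, (nx / length, ny / length, nz / length)

# Averaging many such rays per pixel renders objects near the plane of
# sharp focus crisply and blurs those nearer or farther away.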
There's a rather different approach we can take, based on phenomenology:
We can simply take polygons that need to be rendered and blur them somewhat,
with the amount of blur varying as a function of distance from the camera. This
can achieve a kind of basic depth-of-field effect even in a rasterizing renderer,
at very low cost. If, however, the scene contains long, thin polygons with one
end close to the camera and the other far away, the blurring will not be effective.
Such approaches are better suited for high-speed scenes in video games than for a
single, static rendering of a scene.
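One simple way to choose the blur amount is to make it proportional to a thin-lens-style circle of confusion, computed per vertex or per polygon from distance to the camera; the sketch below uses hypothetical parameter names and omits the constant factors.

def blur_radius(depth, focus_distance, aperture_radius, max_radius):
    """Blur radius (in screen-space units) for a point at the given depth.

    Points at the focus distance get no blur; the blur grows as the point
    moves nearer or farther, clamped to max_radius.  Because the radius
    depends only on depth, a long polygon spanning many depths cannot be
    blurred correctly by a single per-polygon value.
    """
    if depth <= 0.0:
        return max_radius
    radius = aperture_radius * abs(depth - focus_distance) / depth
    return min(radius, max_radius)

# Example: focus at 5 units with a 0.1-unit aperture.
for d in (1.0, 5.0, 20.0):
    print(d, blur_radius(d, 5.0, 0.1, 0.5))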
 
 