The Accumulation Buffer (The Framebuffer) (OpenGL Programming) Part 1

Advanced

The accumulation buffer can be used for such things as scene antialiasing, motion blur, simulating photographic depth of field, and calculating the soft shadows that result from multiple light sources. Other techniques are possible, especially in combination with some of the other buffers. (See The Accumulation Buffer: Hardware Support for High-Quality Rendering by Paul Haeberli and Kurt Akeley [SIGGRAPH 1990 Proceedings, pp. 309-318] for more information about uses for the accumulation buffer.)

OpenGL graphics operations don’t write directly into the accumulation buffer. Typically, a series of images is generated in one of the standard color buffers, and these images are accumulated, one at a time, into the accumulation buffer. When the accumulation is finished, the result is copied back into a color buffer for viewing. To reduce rounding errors, the accumulation buffer may have higher precision (more bits per color) than the standard color buffers. Rendering a scene several times obviously takes longer than rendering it once, but the result is higher quality. You can decide what trade-off between quality and rendering time is appropriate for your application.

You can use the accumulation buffer the same way a photographer can use film for multiple exposures. A photographer typically creates a multiple exposure by taking several pictures of the same scene without advancing the film. If anything in the scene moves, that object appears blurred. Not surprisingly, a computer can do more with an image than a photographer can do with a camera. For example, a computer has exquisite control over the viewpoint, but a photographer can’t shake a camera a predictable and controlled amount.


void glAccum(GLenum op, GLfloat value);

Controls the accumulation buffer. The op parameter selects the operation, and value is a number to be used in that operation. The possible operations are GL_ACCUM, GL_LOAD, GL_RETURN, GL_ADD, and GL_MULT:

• GL_ACCUM reads each pixel from the buffer currently selected for reading with glReadBuffer(), multiplies the R, G, B, and alpha values by value, and adds the resulting values to the accumulation buffer.

• GL_LOAD is the same as GL_ACCUM, except that the values replace those in the accumulation buffer, rather than being added to them.

• GL_RETURN takes values from the accumulation buffer, multiplies them by value, and places the results in the color buffer(s) enabled for writing.

• GL_ADD and GL_MULT add or multiply, respectively, the value of each pixel in the accumulation buffer by value and then return the result to the accumulation buffer. For GL_MULT, value is clamped to the range [-1.0, 1.0]. For GL_ADD, no clamping occurs.
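As a hedged illustration of how these operations combine (drawFrame() is a hypothetical routine, not part of the original text), the following sketch averages two renderings of a scene:

/* Sketch: blend two renderings with equal weight using the accumulation buffer. */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawFrame(0);              /* hypothetical: render the first image */
glAccum(GL_LOAD, 0.5);     /* load it, scaled by 0.5, replacing the accumulation buffer */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawFrame(1);              /* render the second image */
glAccum(GL_ACCUM, 0.5);    /* scale it by 0.5 and add it to the accumulation buffer */
glAccum(GL_RETURN, 1.0);   /* write the averaged result to the color buffer */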

Scene Antialiasing

To perform scene antialiasing, first clear the accumulation buffer and enable the front buffer for reading and writing. Then loop several (say, n) times through code that jitters and draws the image (jittering is moving the image to a slightly different position), accumulating the data with

glAccum(GL_ACCUM, 1.0/n);

and finally calling

glAccum(GL_RETURN, 1.0);

Note that this method is a bit faster if, on the first pass through the loop, GL_LOAD is used and clearing the accumulation buffer is omitted. (See Table 10-5 for possible jittering values.) With this code, the image is drawn n times before the final image is drawn. If you want to avoid showing the user the intermediate images, draw into a color buffer that’s not displayed, accumulate from that buffer, and use the GL_RETURN call to draw into a displayed buffer (or into a back buffer that you subsequently swap to the front).

You could instead present a user interface that shows the viewed image improving as each additional piece is accumulated and that allows the user to halt the process when the image is good enough. To accomplish this, in the loop that draws successive images, call glAccum() with GL_RETURN after each accumulation, using 16.0/1.0, 16.0/2.0, 16.0/3.0, … as the second argument. With this technique, after one pass, 1/16 of the final image is shown; after two passes, 2/16 is shown; and so on. After the GL_RETURN, the code should check whether the user wants to interrupt the process. This interface is slightly slower, since the resulting image must be copied back into the color buffer after each pass.
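A sketch of this progressive interface might look like the following, assuming 16 jitter positions; drawJitteredScene() and userWantsToStop() are hypothetical routines standing in for the application's drawing code and its interruption check:

/* Sketch: show the partial antialiased image after each of 16 passes. */
glClear(GL_ACCUM_BUFFER_BIT);
for (pass = 1; pass <= 16; pass++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawJitteredScene(pass);                    /* hypothetical: draw with this pass's jitter */
    glAccum(GL_ACCUM, 1.0 / 16.0);              /* add this pass to the running total */
    glAccum(GL_RETURN, 16.0 / (GLfloat) pass);  /* scale the partial total back up and show it */
    glFlush();
    if (userWantsToStop())                      /* hypothetical: check for user interruption */
        break;
}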

To decide what n should be, you need to trade off speed (the more times you draw the scene, the longer it takes to obtain the final image) and quality (the more times you draw the scene, the smoother it gets, until you make maximum use of the accumulation buffer’s resolution). Plate 22 and Plate 23 show improvements made using scene antialiasing.

Example 10-4 defines two routines for jittering that you might find useful: accPerspective() and accFrustum(). The routine accPerspective() is used in place of gluPerspective(), and the first four parameters of both routines are the same. To jitter the viewing frustum for scene antialiasing, pass the x and y jitter values (of less than 1 pixel) to the fifth and sixth parameters of accPerspective(). Also, pass 0.0 for the seventh and eighth parameters to accPerspective() and a nonzero value for the ninth parameter (to prevent division by zero inside accPerspective()). These last three parameters are used for depth-of-field effects, which are described later in this topic.

Example 10-4 Routines for Jittering the Viewing Volume: accpersp.c

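A minimal sketch of what the two routines might look like, based only on the parameter description above (the function bodies here are an assumption, not a verbatim copy of accpersp.c):

#include <GL/gl.h>
#include <math.h>

/* accFrustum() behaves like glFrustum(), except that it shifts the frustum
   by (pixdx, pixdy) pixels for antialiasing and by (eyedx, eyedy), scaled
   by focus, for depth-of-field effects. */
void accFrustum(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top,
                GLdouble znear, GLdouble zfar, GLdouble pixdx, GLdouble pixdy,
                GLdouble eyedx, GLdouble eyedy, GLdouble focus)
{
    GLdouble xwsize, ywsize, dx, dy;
    GLint viewport[4];

    glGetIntegerv(GL_VIEWPORT, viewport);

    xwsize = right - left;
    ywsize = top - bottom;

    /* Convert the pixel jitter into an offset on the near clipping plane. */
    dx = -(pixdx * xwsize / (GLdouble) viewport[2] + eyedx * znear / focus);
    dy = -(pixdy * ywsize / (GLdouble) viewport[3] + eyedy * znear / focus);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left + dx, right + dx, bottom + dy, top + dy, znear, zfar);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eyedx, -eyedy, 0.0);
}

/* accPerspective() takes the same first four parameters as gluPerspective(),
   converts the field of view into frustum bounds, and passes the jitter
   values through to accFrustum(). */
void accPerspective(GLdouble fovy, GLdouble aspect,
                    GLdouble znear, GLdouble zfar,
                    GLdouble pixdx, GLdouble pixdy,
                    GLdouble eyedx, GLdouble eyedy, GLdouble focus)
{
    GLdouble fov2, left, right, bottom, top;

    fov2 = ((fovy * M_PI) / 180.0) / 2.0;
    top = znear / (cos(fov2) / sin(fov2));
    bottom = -top;
    right = top * aspect;
    left = -right;

    accFrustum(left, right, bottom, top, znear, zfar,
               pixdx, pixdy, eyedx, eyedy, focus);
}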

Example 10-5 uses the two routines in Example 10-4 to perform scene antialiasing.

Example 10-5 Scene Antialiasing: accpersp.c

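A sketch of how the display routine in this example might be structured, following the description above; the jitter table j8[], the constant ACSIZE, and displayObjects() are assumptions standing in for the program's actual jitter values (see Table 10-5) and its scene-drawing code:

#define ACSIZE 8    /* assumed number of jitter positions */

void display(void)
{
    GLint viewport[4];
    int jitter;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glClear(GL_ACCUM_BUFFER_BIT);
    for (jitter = 0; jitter < ACSIZE; jitter++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* Jitter the frustum by a fraction of a pixel; the last three
           arguments disable the depth-of-field offset. */
        accPerspective(50.0,
                       (GLdouble) viewport[2] / (GLdouble) viewport[3],
                       1.0, 15.0,
                       j8[jitter].x, j8[jitter].y, 0.0, 0.0, 1.0);
        displayObjects();                 /* hypothetical: draw the scene */
        glAccum(GL_ACCUM, 1.0 / ACSIZE);  /* add this pass to the total */
    }
    glAccum(GL_RETURN, 1.0);              /* copy the antialiased image back */
    glFlush();
}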

You don’t have to use a perspective projection to perform scene antialiasing. You can antialias a scene drawn with an orthographic projection simply by using glTranslate*() to jitter the scene. Keep in mind that glTranslate*() operates in world coordinates, but you want the apparent motion of the scene to be less than 1 pixel, measured in screen coordinates. Thus, you must reverse the world-coordinate mapping: divide the width or height of the orthographic viewing volume in world coordinates by the viewport size in pixels to find how many world-coordinate units correspond to one pixel. Then multiply that value by the amount of jitter (a fraction of a pixel) to determine how far the scene should be moved in world coordinates to get a predictable jitter of less than 1 pixel. Example 10-6 shows how the display() and reshape() routines might look with a world-coordinate width and height of 4.5.

Example 10-6 Jittering with an Orthographic Projection: accanti.c

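A sketch of what display() and reshape() might look like under these assumptions, again using a hypothetical jitter table j8[], ACSIZE jitter positions, and a displayObjects() routine:

void display(void)
{
    GLint viewport[4];
    int jitter;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glClear(GL_ACCUM_BUFFER_BIT);
    for (jitter = 0; jitter < ACSIZE; jitter++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glPushMatrix();
        /* 4.5 world-coordinate units span the viewport, so one pixel
           corresponds to 4.5/viewport[2] (or 4.5/viewport[3]) units;
           scale the sub-pixel jitter by that amount. */
        glTranslatef(j8[jitter].x * 4.5 / viewport[2],
                     j8[jitter].y * 4.5 / viewport[3], 0.0);
        displayObjects();                 /* hypothetical: draw the scene */
        glPopMatrix();
        glAccum(GL_ACCUM, 1.0 / ACSIZE);
    }
    glAccum(GL_RETURN, 1.0);
    glFlush();
}

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* Keep the world-coordinate window 4.5 units across on its shorter side. */
    if (w <= h)
        glOrtho(-2.25, 2.25, -2.25 * (GLdouble) h / (GLdouble) w,
                 2.25 * (GLdouble) h / (GLdouble) w, -10.0, 10.0);
    else
        glOrtho(-2.25 * (GLdouble) w / (GLdouble) h,
                 2.25 * (GLdouble) w / (GLdouble) h, -2.25, 2.25, -10.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}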
