Direct Volume Rendering
So far, we have been visualizing 3D volumetric data with reduced geometry—
3D points or 2D planes. What if we want to peer into the entire volume at once?
This is known as direct volume rendering. There are a number of ways to do this.
One of the most common is to create many parallel interpolated color cutting
planes and composite (blend) them back-to-front. This works well as long as
you keep two things in mind:
1. As the eye moves, the planes need to be reoriented to always be perpendicular
to the viewing direction, so that you never see the sides of a plane.
2. If you want OpenGL to do the compositing for you, you must draw the
planes scene-back-to-scene-front, relative to the eye position, regardless
of how the scene is oriented.
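The back-to-front compositing in step 2 is the standard "over" blend that OpenGL performs with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). A minimal sketch of that blend in plain Python (the function name and slice representation are illustrative, not from the text):

```python
def composite_back_to_front(slices):
    """Composite a list of (r, g, b, a) slice colors, ordered
    farthest-from-the-eye first, using the 'over' operator:
    out = src * src_a + dst * (1 - src_a)."""
    out = (0.0, 0.0, 0.0)            # accumulated color (the "framebuffer")
    for (r, g, b, a) in slices:      # draw the farthest slice first
        out = (r * a + out[0] * (1.0 - a),
               g * a + out[1] * (1.0 - a),
               b * a + out[2] * (1.0 - a))
    return out
```

Because each new slice is nearer the eye than everything already accumulated, it correctly occludes the running result in proportion to its alpha; drawing in any other order gives the wrong picture, which is why the planes must be sorted relative to the eye.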
Instead of using that technique, we will describe a ray-casting approach,
because it's a more interesting use of shaders. Once again, we will use dummy
quadrilaterals, not because we want to display quadrilaterals, but because we
want to compute some display colors and need a place to put them. We position
six quadrilaterals, looking like a cube, all one unit away from the origin, to
become the faces on which we will display the resulting fragments.
So envision the process this way. The volume data is in a 3D texture,
which you can think of as being bounded by the six quadrilaterals. 1 You are
sitting on an arrow at one of the 3D fragments. Your task is to “fly” through the 3D
volume texture in a straight line, compositing colors as you go. You will paint
the final composited color onto the fragment at which you started your flight.
Starting at each fragment, we then need to choose a ray-casting direction.
We will start by choosing it in eye coordinates and will then convert it to
texture coordinates, so that we can “fly” through the 3D texture. If we are using
an orthographic (parallel) projection, producing this direction is easy. Because
we are viewing the scene from the front, the direction will be (0, 0, -1) for all
fragments. If we are using a perspective projection, the tracing direction will
be a vector from the eye through the fragment being processed. We will use the
vertex shader to compute this vector for each vertex being processed, and then
let the rasterizer interpolate those vectors into each fragment.
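The two cases can be sketched as a single function. In eye coordinates the eye sits at the origin looking down -z, so for a perspective projection the direction is just the normalized fragment position; the function name and tuple representation here are illustrative assumptions, not the book's shader code:

```python
import math

def trace_direction(frag_eye, perspective):
    """Ray-casting direction in eye coordinates for one fragment.
    frag_eye: the fragment's position in eye space, where the eye
    is at the origin looking down the -z axis."""
    if not perspective:
        return (0.0, 0.0, -1.0)        # orthographic: same for every fragment
    x, y, z = frag_eye                 # perspective: eye-to-fragment vector
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

In the shader pipeline this per-vertex vector is emitted by the vertex shader and interpolated by the rasterizer, so the fragment shader receives a (nearly unit) direction for each fragment without recomputing it.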
1. This is only a loose analogy. The quadrilaterals, and thus the fragments, are in the 3D world
coordinate system. The volume data scalar values are in texture coordinates. We are going to force the
data volume inside the quadrilaterals with an equation that relates the quadrilaterals' [-1.,+1.] world
space to the texture coordinates' [0.,1.] space. Even though the quadrilaterals and the 3D volume
texture are in two different coordinate spaces, it is useful to think of them as being in the same space
with an equation that connects them.
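The equation relating the two spaces is a simple per-component affine map from the quadrilaterals' [-1,+1] range to the texture's [0,1] range, t = (w + 1) / 2. A one-line sketch (the function name is ours):

```python
def world_to_texture(p):
    """Map a point in the quadrilaterals' [-1,+1] world space to
    [0,1] 3D texture coordinates: t = (w + 1) / 2, per component."""
    return tuple((c + 1.0) * 0.5 for c in p)
```

Applying this to each point along the flight path lets the ray march in world space while sampling the 3D texture at the matching texel.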