Figure 10-9. Blending done wrong; the triangle in the back should shine through
The objects consist only of triangles so far, which is, of course, a bit simplistic. We'll revisit
blending in conjunction with the z-buffer when we render more complex shapes. For now,
let's summarize how to handle blending in 3D (a minimal code sketch follows the list):
1. Render all opaque objects.
2. Sort all transparent objects by decreasing distance from the camera (the farthest
object comes first).
3. Render all transparent objects in that sorted order, farthest to closest.
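As a rough illustration, here is a minimal sketch of such a render pass for OpenGL ES 1.x. The
GameObject class and its draw() method are hypothetical stand-ins for whatever scene objects
you use, not part of this book's framework; only the GL state calls are meant literally, and the
transparent list is assumed to be sorted already (a center-based sorting sketch appears a bit
further below).

    import java.util.List;

    import javax.microedition.khronos.opengles.GL10;

    // Hypothetical scene object: a world-space center plus its own draw call.
    // This is an illustrative stand-in, not a class from this book's framework.
    abstract class GameObject {
        public float centerX, centerY, centerZ;

        public abstract void draw(GL10 gl);
    }

    class BlendedScene {
        // Assumes the transparent list is already sorted farthest to closest.
        public void render(GL10 gl, List<GameObject> opaque, List<GameObject> transparent) {
            // Step 1: opaque objects, depth testing on, blending off.
            gl.glEnable(GL10.GL_DEPTH_TEST);
            gl.glDisable(GL10.GL_BLEND);
            for (GameObject obj : opaque)
                obj.draw(gl);

            // Steps 2 and 3: transparent objects, blended, drawn back to front.
            gl.glEnable(GL10.GL_BLEND);
            gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            for (GameObject obj : transparent)
                obj.draw(gl);
            gl.glDisable(GL10.GL_BLEND);
        }
    }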
In most cases, the sorting can be based on the distance of each object's center from the camera.
You'll run into problems if one of your objects is large and spans several other objects; its center
is then a poor stand-in for all of its parts. Without advanced tricks, there's no way to work around
that issue in general. There are a couple of bulletproof solutions that work great with the desktop
variant of OpenGL, but they can't be implemented on most Android devices due to their limited GPU
functionality. Luckily, such cases are rare, and you can almost always stick to simple
center-based sorting.
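Center-based sorting then boils down to a comparator over the squared distance from each object's
center to the camera; squared distances order the same way as real distances, so no square root is
needed. This sketch reuses the hypothetical GameObject from the example above, and the camera
coordinates are assumed to come from your own camera code.

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    class TransparencySorter {
        // Sorts the list in place, farthest object first (back to front).
        public static void sortFarthestFirst(List<GameObject> objects,
                final float camX, final float camY, final float camZ) {
            Collections.sort(objects, new Comparator<GameObject>() {
                @Override
                public int compare(GameObject a, GameObject b) {
                    return Float.compare(squaredDistance(b, camX, camY, camZ),
                                         squaredDistance(a, camX, camY, camZ));
                }
            });
        }

        private static float squaredDistance(GameObject o, float camX, float camY, float camZ) {
            float dx = o.centerX - camX;
            float dy = o.centerY - camY;
            float dz = o.centerZ - camZ;
            return dx * dx + dy * dy + dz * dz;
        }
    }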
Z-buffer Precision and Z-fighting
It's always tempting to abuse the near and far clipping planes to show as much of our awesome
scene as possible. We've put a lot of effort into adding a ton of objects to our world, after all,
and that effort should be visible. The only problem with this is that the z-buffer has a limited
precision. On most Android devices, each depth value stored in the z-buffer has no more than
16 bits; that's 65,536 distinct depth values at most. Thus, instead of setting the near clipping
plane distance to 0.00001 and the far clipping plane distance to 1000000, we should stick
to more reasonable values. Otherwise, we'll soon find out what nice artifacts an improperly
configured view frustum can produce in combination with the z-buffer.
What is the problem? Imagine that we set our near and far clipping planes as just mentioned.
A pixel's depth value is, more or less, its distance from the near clipping plane: the closer it is,
the smaller its depth value. With a 16-bit depth buffer, the near-to-far range is quantized
internally into 65,536 discrete steps; each step then covers roughly 1,000,000 / 65,536 ≈ 15 units
in our world. If our units are meters and our objects have typical sizes like 1×2×1 meters, an
entire object fits inside a single step, so the z-buffer won't help us much because all of its
pixels get the same depth value.
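To put rough numbers on this, here is a tiny sketch following the simplified linear model used in
this paragraph (real depth buffers distribute their precision non-linearly, so treat the result as
an order-of-magnitude estimate only):

    public class DepthResolution {
        // World units covered by one depth step under a naive linear mapping.
        static float unitsPerDepthStep(float near, float far, int depthBits) {
            float steps = (float) Math.pow(2, depthBits); // 65,536 steps for 16 bits
            return (far - near) / steps;
        }

        public static void main(String[] args) {
            // The extreme frustum from the example: roughly 15 world units per step,
            // so a 1x2x1 meter object collapses onto a single depth value.
            System.out.println(unitsPerDepthStep(0.00001f, 1000000f, 16));
            // A more reasonable frustum gives millimeter-scale resolution instead.
            System.out.println(unitsPerDepthStep(0.1f, 100f, 16));
        }
    }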
 