The type of encoding we use is also called the color depth. Images we create and store on disk or in memory have a defined color depth, and so do the framebuffers of the actual graphics hardware and the display itself. Today's displays usually have a default color depth of 24 bits, and they can be configured to use less in some cases. The framebuffer of the graphics hardware is also rather flexible and can use many different color depths. Our own images can, of course, have any color depth we like.
Note There are a lot more ways to encode per-pixel color information. Apart from RGB colors, we
could also have grayscale pixels, which only have a single component. As those are not used a lot,
we'll ignore them at this point.
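To make the encodings concrete, here's a minimal sketch of how RGB pixel colors are packed into integers. The function names are our own for illustration; RGB888 uses 8 bits per component (24 bits total), while RGB565 quantizes the components down to 5, 6, and 5 bits (16 bits total):

```python
def pack_rgb888(r, g, b):
    """Pack 8-bit red, green, and blue components into a 24-bit value."""
    return (r << 16) | (g << 8) | b

def rgb888_to_rgb565(r, g, b):
    """Quantize 8-bit components to 5/6/5 bits and pack them into 16 bits."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

white_24 = pack_rgb888(255, 255, 255)       # 0xFFFFFF
white_16 = rgb888_to_rgb565(255, 255, 255)  # 0xFFFF
red_16 = rgb888_to_rgb565(255, 0, 0)        # 0xF800
```

Note that converting to RGB565 throws away the low bits of each component, which is why the 16-bit encoding trades some color fidelity for a smaller memory footprint.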
Image Formats and Compression
At some point in our game development process, our artist will provide us with images that were
created with graphics software like Gimp, Paint.NET, or Photoshop. These images can be stored
in a variety of formats on disk. Why is there a need for these formats in the first place? Can't we
just store the raster as a blob of bytes on disk?
Well, we could, but let's check how much memory that would take up. Say that we want the best
quality, so we choose to encode our pixels in RGB888 at 24 bits per pixel. The image would be
1,024 × 1,024 pixels in size. That's 3MB for a single puny image alone! Using RGB565, we can
get that down to roughly 2MB.
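The arithmetic behind those numbers is simple: width times height times bytes per pixel. A quick sketch, with a helper function of our own invention:

```python
def image_size_bytes(width, height, bits_per_pixel):
    """Uncompressed size of a raster image in bytes."""
    return width * height * bits_per_pixel // 8

rgb888_size = image_size_bytes(1024, 1024, 24)  # 3,145,728 bytes = 3MB
rgb565_size = image_size_bytes(1024, 1024, 16)  # 2,097,152 bytes = 2MB
```

Even at 2MB per image, a game with dozens of full-size images would quickly exhaust memory without compression.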
As in the case of audio, there's been a lot of research on how to reduce the memory needed
to store an image. As usual, compression algorithms are employed, specifically tailored for the
needs of storing images and keeping as much of the original color information as possible. The
two most popular formats are JPEG and PNG. JPEG is a lossy format. This means that some of
the original information is thrown away in the process of compression. PNG is a lossless format,
and it will reproduce an image that's 100 percent true to the original. Lossy formats usually
exhibit better compression characteristics and take up less space on disk. We can therefore
choose what format to use depending on the disk memory constraints.
As with sound effects, we have to decompress an image fully when we load it into memory. So even if your image takes up only 20KB compressed on disk, you still need the full width times height times color depth of storage space in RAM.
Once loaded and decompressed, the image will be available in the form of an array of pixel
colors in exactly the same way the framebuffer is laid out in VRAM. The only differences are that
the pixels are located in normal RAM and that the color depth might differ from the framebuffer's
color depth. A loaded image also has a coordinate system like the framebuffer, with the origin in
its top-left corner, the x axis pointing to the right, and the y axis pointing downward.
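With that layout, the pixels form a single row-major array, and addressing a pixel at (x, y) is one multiply and one add. A minimal sketch (the helper name is ours):

```python
def pixel_index(x, y, width):
    """Index of pixel (x, y) in a row-major pixel array, origin top-left."""
    return y * width + x

# In a 1,024-pixel-wide image, pixel (0, 1) starts the second row.
second_row_start = pixel_index(0, 1, 1024)  # 1024
```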
Once an image is loaded, we can draw it to the framebuffer simply by transferring the pixel colors from the image in RAM to the appropriate locations in the framebuffer. We don't do this by hand; instead, we use an API that provides that functionality.
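Under the hood, such an API boils down to a copy loop like the following sketch. This is our own simplified illustration, not any particular API's implementation; it assumes both the image and the framebuffer use the same color depth, and it clips pixels that fall outside the framebuffer:

```python
def blit(image, img_w, img_h, framebuffer, fb_w, fb_h, dest_x, dest_y):
    """Copy an image's pixels into a framebuffer at (dest_x, dest_y).

    Both buffers are flat, row-major lists of packed pixel colors
    with the origin in the top-left corner.
    """
    for y in range(img_h):
        fy = dest_y + y
        if fy < 0 or fy >= fb_h:
            continue  # clip rows that fall outside the framebuffer
        for x in range(img_w):
            fx = dest_x + x
            if fx < 0 or fx >= fb_w:
                continue  # clip columns that fall outside the framebuffer
            framebuffer[fy * fb_w + fx] = image[y * img_w + x]
```

A real API does the same thing far more efficiently (copying whole rows at once, converting color depths on the fly, and so on), but the principle is identical.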
 