JPEG is the de facto image format of the Internet. It is popular because it produces files that are very small
compared to the other formats. That small footprint comes at the cost of precision, however, because JPEG is a lossy
format. As such, it tends to work well for photo-like images (such as our sample textures), but it can make a mess
of textures that require more precision, such as normal maps. JPEG also lacks an alpha channel, which makes it
inappropriate for anything that needs transparency. Still, if your texture can live with these restrictions, JPEG is
clearly the winner on the file size front.
WebP is the most interesting file format you probably can't use. On paper, it looks like the perfect format: it supports
both lossless and lossy compression, alpha channels, and typically provides better compression than even the
mighty JPEG! The singular downside is browser support: as of this writing, only Chrome and Opera support the
format. (Firefox has expressed interest in the format, but it does not support it at this time.) Depending on your target
audience, however, WebP may still be a viable option. If you are building a Chrome App, for example, it would easily
be the best format to use. You can find out more about this up-and-coming format on its project site.
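If you do want to serve WebP where it is available, one common way to detect support at runtime is to ask the browser to encode a canvas as WebP: browsers that can encode the format return a "data:image/webp" URL, while others silently fall back to PNG. The sketch below uses that technique; the helper name and texture paths are illustrative, not part of the gallery application.

```javascript
// Feature-detect WebP support by asking the browser to encode a 1x1 canvas.
// Browsers that can encode WebP return a "data:image/webp" URL; browsers
// that cannot silently fall back to "data:image/png".
function supportsWebP() {
  var canvas = document.createElement("canvas");
  canvas.width = canvas.height = 1;
  return canvas.toDataURL("image/webp").indexOf("data:image/webp") === 0;
}

// Pick a texture path based on the result (paths are hypothetical):
// var textureUrl = supportsWebP() ? "textures/wall.webp" : "textures/wall.jpg";
```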
For the gallery application, you don't need transparency and you're not concerned about artifacts caused by lossy
compression. Thus you'll use JPEGs as your default to take advantage of the small file size.
Memory Use
If the only thing that you had to worry about was bandwidth, the choice would be easy. You would go with PNG for
lossless or transparent textures, JPEG when you can afford to be lossy, or WebP if your project's scope allows it. Done!
As mentioned earlier, however, bandwidth only covers half of the optimization story. The other half is how much
memory the image takes up in video memory.
The problem is that your GPU doesn't know how to decode a JPEG. Nor should it! Decoding an image format like
JPEG is relatively slow, and your rendering speed would plummet if your GPU had to decode the file on every frame. As
a result, when you use any of the native browser image formats as the source of a WebGL texture, they are uploaded to
GPU memory fully decoded and uncompressed. This means that while your 1024×1024 JPEG may take up only 250KB
on your disk, it expands to a whopping 3MB in your GPU's memory! (1024 × 1024 pixels × 24 bits per pixel = 3MB.) In other
words, no matter what image format you use, when it reaches your video card, every image is the size of a BMP. At
those sizes, you can fill up your video memory pretty quickly.
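The arithmetic is simple enough to capture in a small helper. This is just a sketch; the function name is ours, not part of the gallery code.

```javascript
// Estimate how much GPU memory a decoded texture occupies. The on-disk
// file size is irrelevant: every format is decoded to raw pixels before
// upload, so only the dimensions and channel count matter.
function textureMemoryBytes(width, height, bytesPerPixel) {
  return width * height * bytesPerPixel;
}

// A 1024x1024 RGB image (24 bits = 3 bytes per pixel):
var bytes = textureMemoryBytes(1024, 1024, 3);
console.log(bytes);                        // 3145728
console.log(bytes / (1024 * 1024) + "MB"); // "3MB"
```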
There's also a secondary problem related to the large image size: the image can take a long time to upload. The
exact time varies by browser and hardware, but on my reasonably beefy 15" Retina MacBook, uploading a 1024×1024
JPEG to the GPU via texImage2D in Chrome usually takes somewhere between 11 and 18ms, during which time
all other JavaScript execution is blocked. You can see this for yourself by running the gallery application and opening
the browser's JavaScript console. For the purposes of this demonstration, each texture upload is timed and output to
the development console, so you should see output like this:
Upload Texture: 16.233ms webgl-util.js:54
Upload Texture: 17.495ms webgl-util.js:54
Upload Texture: 14.923ms webgl-util.js:54
Upload Texture: 11.640ms webgl-util.js:54
Upload Texture: 11.645ms webgl-util.js:54
Upload Texture: 14.235ms webgl-util.js:54
Upload Texture: 15.599ms webgl-util.js:54
Upload Texture: 11.946ms webgl-util.js:54
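Timings like these can be produced by wrapping the texImage2D call in console.time/console.timeEnd. The sketch below assumes a valid WebGL context gl and a fully loaded Image; the function name and texture parameters are illustrative and not taken from webgl-util.js.

```javascript
// Upload an image to the GPU as a texture, timing the (blocking) upload.
// console.time/timeEnd emit "Upload Texture: NN.NNNms" lines like the
// ones shown above.
function uploadTexture(gl, image) {
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);

  console.time("Upload Texture");
  // The decoded pixels are copied to GPU memory here; JavaScript
  // execution is blocked until the copy completes.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  console.timeEnd("Upload Texture");

  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  return texture;
}
```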