1,280 video up 33 percent, and it will look reasonably good. The exception to this may
be HD or UHD (popularly termed 4K iTV) games targeted at iTVs; for these huge, 55-
to 75-inch (screen) scenarios, you would want to use the industry standard, true HD
1,920 × 1,080 resolution.
The next level of optimization would come in the number of frames used for
each second of video (or FPS), assuming the actual number of seconds in the digital
video itself cannot be shortened. As mentioned earlier, this is known as the frame rate,
and instead of the video-standard 30FPS frame rate, consider using the film-standard
frame rate of 24FPS or even the multimedia standard of 20FPS. You may
even be able to use a 15FPS frame rate, half the video standard, depending on the
amount (and speed) of movement within the content. Note that 15FPS encodes half as much
data as 30FPS (a 50 percent reduction in data encoded). For some video content this
will play back (look) the same as 30FPS content. The only way to test this is to try
frame rate settings during the encoding process.
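The data-footprint impact of a lower frame rate is straightforward arithmetic: fewer frames per second means proportionally fewer frames for the codec to encode. A quick sketch of that arithmetic (the percentages are frame-count reductions only, not measured file-size savings, since actual compression varies with content):

```python
# Compare how many frames a codec must process at common frame
# rates, expressed as a reduction relative to the 30FPS video standard.

def frame_reduction(fps, baseline=30):
    """Percent fewer frames than the baseline frame rate."""
    return (1 - fps / baseline) * 100

for fps in (30, 24, 20, 15):
    print(f"{fps} FPS -> {frame_reduction(fps):.0f}% fewer frames than 30 FPS")
```

This reproduces the figures discussed above: 24FPS encodes 20 percent fewer frames, 20FPS about 33 percent fewer, and 15FPS exactly half (50 percent fewer).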
The next most optimal setting for obtaining a smaller data footprint would be the
bit rate that you set for a codec to try to achieve. Bit rate equates to the amount of
compression applied and thus sets a quality level for the digital video data. It is im-
portant to note that you could simply use 30FPS, 1,920 × 1,080 HD video and speci-
fy a low bit-rate ceiling. If you do this, the results will not be as good-looking as if you
had experimented with using a lower frame rate and resolution, in conjunction with a
higher (quality) bit-rate setting. There is no set rule for this, as every digital video asset
contains completely unique data (from the codec's point of view).
The next most effective setting for obtaining a smaller data footprint is the number
of key frames that the codec uses to sample your digital video. Video codecs apply
compression by looking at each frame and then encoding only the changes, or offsets,
over the next several frames so that the codec algorithm does not have to encode every
single frame in a video data stream. This is why a talking head video will encode better
than a video in which every pixel moves on every frame (such as video that uses fast
camera panning, or rapid field of view [FOV] zooming).
A key frame is a setting in a codec that forces that codec to take a fresh sampling
of your video data assets every so often. There is usually an auto setting for key
frames, which allows a codec to decide how many key frames to sample, as well as a
manual setting, which lets you specify a key frame sampling every so often, usually a
certain number of times per second or over the duration of the entire video (total
frames).
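A manual key-frame setting is typically expressed as a frame interval (how many frames pass between forced full samples). The sketch below converts a "key frame every N seconds" choice into that interval and counts the resulting key frames over a clip; the helper names and values are hypothetical, for illustration only:

```python
# Convert a manual key-frame choice ("one full sample every N seconds")
# into the frame interval a codec setting usually expects, and count
# how many key frames a clip of a given duration would contain.

def keyframe_interval(fps, seconds_between_keyframes):
    """Frames between forced key frames."""
    return int(fps * seconds_between_keyframes)

def keyframe_count(fps, duration_seconds, seconds_between_keyframes):
    """Total key frames over the clip; the first frame is always one."""
    interval = keyframe_interval(fps, seconds_between_keyframes)
    total_frames = int(fps * duration_seconds)
    return 1 + (total_frames - 1) // interval

print(keyframe_interval(24, 2))   # key frame every 48 frames at 24FPS
print(keyframe_count(24, 60, 2))  # key frames in a one-minute clip
```

Fewer key frames (a longer interval) means more frames encoded as offsets and a smaller footprint; more key frames means a fresher sampling, which helps content with rapid motion at the cost of more data.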
Most codecs usually have either a quality or a sharpness setting (a slider) that con-
trols the amount of blur applied to a video frame before compression. In case you are
not familiar with this trick, applying a slight blur to your image or video, which is usu-