Image Processing Reference
Digital broadcasts should be recorded digitally if possible. Products that capture the
streamed MPEG-2 video tend to be aimed at high-end users, and even then, extracting
the video content is technically challenging. Recording the transmitted bit stream to a
file is straightforward, but the recording must then be de-multiplexed to extract the
program stream. Before an editor can use it, that program stream has to be separated
further into its elementary audio and video streams. A format conversion may also be
necessary, at which point there is a risk of gamma value corruption.
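The first de-multiplexing step can be sketched in a few lines. The packet layout below follows the MPEG-2 transport-stream format (188-byte packets, a 0x47 sync byte, and a 13-bit PID in bytes 1-2); the function names and the idea of simply filtering by PID are an illustration, not a complete demuxer:

```python
# Minimal sketch of transport-stream de-multiplexing: MPEG-2 TS packets
# are 188 bytes, begin with a 0x47 sync byte, and carry a 13-bit packet
# identifier (PID) in bytes 1-2. Selecting packets by PID is the core of
# extracting one program's audio or video from the multiplex.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def packet_pid(packet: bytes) -> int:
    """Extract the 13-bit PID from a single TS packet."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid TS packet")
    return ((packet[1] & 0x1F) << 8) | packet[2]

def demux(stream: bytes, wanted_pid: int) -> list:
    """Return the packets in the stream that match wanted_pid."""
    packets = []
    for offset in range(0, len(stream) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = stream[offset:offset + TS_PACKET_SIZE]
        if packet_pid(packet) == wanted_pid:
            packets.append(packet)
    return packets
```

A real tool would go on to reassemble PES packets from the filtered payloads and then split those into the separate elementary audio and video streams mentioned above.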
Most of the issues with video quality relate to the bandwidth of the video content,
which equates to its resolving capability along the horizontal axis.
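As a rough worked example of that equivalence, the classic analog arithmetic runs as follows (the NTSC figures are standard values; the factor of two assumes each signal cycle resolves one light-dark pair):

```python
# Rough relationship between channel bandwidth and horizontal resolution:
# each cycle of the video signal can resolve one light/dark pair, so the
# number of alternations across the active line is about 2 * B * T.

bandwidth_hz = 4.2e6        # NTSC luma bandwidth
active_line_s = 52.7e-6     # active portion of one NTSC scan line
aspect_ratio = 4 / 3

alternations = 2 * bandwidth_hz * active_line_s    # across the picture width
tv_lines_per_height = alternations / aspect_ratio  # normalized to picture height

print(round(alternations), round(tv_lines_per_height))
```

The result, roughly 330 TV lines per picture height, matches the figure usually quoted for NTSC broadcast.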
The Results Have to Look Good
The next few chapters will examine the effects of encoding different kinds of content to see
what artifacts are introduced. Cuts, wipes, dissolves, and so forth will all introduce dif-
ferent encoded bit-rate loadings and artifacts. Video content that has been created with
some knowledge of the compression consequences will produce a compressed output that
streams more evenly.
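A toy model (entirely synthetic numbers, not measured data) illustrates why a cut and a dissolve load the encoder so differently: both transitions involve the same total change, but the cut concentrates it in a single frame while the dissolve spreads it across the whole transition:

```python
# Toy model: frame-to-frame change as a proxy for encoded bit-rate load.
# A hard cut concentrates all the change in one frame; a dissolve spreads
# the same total change over the transition, so the bit rate stays smoother.

def frame_deltas_cut(level_a, level_b, n_frames):
    """Per-frame change for a hard cut at the midpoint."""
    frames = [level_a] * (n_frames // 2) + [level_b] * (n_frames - n_frames // 2)
    return [abs(b - a) for a, b in zip(frames, frames[1:])]

def frame_deltas_dissolve(level_a, level_b, n_frames):
    """Per-frame change for a linear dissolve across all frames."""
    frames = [level_a + (level_b - level_a) * i / (n_frames - 1)
              for i in range(n_frames)]
    return [abs(b - a) for a, b in zip(frames, frames[1:])]

cut = frame_deltas_cut(0.0, 1.0, 10)
dissolve = frame_deltas_dissolve(0.0, 1.0, 10)
# Same total change, very different peaks: the cut's single delta spans the
# whole transition, while each dissolve step is only a small fraction of it.
```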
Content That Is Hard to Encode
Occasionally, you are going to find that some of your video content is difficult to encode.
Race cars, for example, move at a very high speed. If the camera shutter is open for
any length of time, there will be significant amounts of motion blur. Because the point of
view and attitude of the car change from frame to frame, the number of reusable mac-
roblocks diminishes drastically. When you shoot from the side of the track, the back-
ground blurs differently from the car. Cameras mounted in the car are subject to all
sorts of movement, but the interior of the car stays stationary with respect to the cam-
era. This might actually compress better than the output from a track-side camera
panning to follow the cars, where a lot of new information has to be coded for almost every
frame. In Figure 31-3, the train has been motion-blurred on purpose to look like it is speed-
ing through the stations. This shows that the image contains some very rapidly moving
objects as well as some stationary ones. The boundaries of the two kinds of object are espe-
cially hard to code.
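A much-simplified sketch of the encoder's block-matching step shows why fast motion reduces reusable macroblocks. Real encoders search two-dimensional 16 x 16 macroblocks; this one-dimensional version with made-up signal values keeps the idea visible: when the displacement exceeds the search range, no previous-frame block matches and the block must be coded as new information:

```python
# Simplified 1-D block matching: for each block of the current frame, search
# the previous frame within +/- search_range for the best match (lowest sum
# of absolute differences, SAD). Blocks without a good match cannot reuse
# previous-frame data and must be intra-coded, which costs many more bits.

def best_match_sad(prev, cur_block, start, search_range):
    """Lowest SAD between cur_block and any shifted block of prev."""
    size = len(cur_block)
    best = float("inf")
    for shift in range(-search_range, search_range + 1):
        pos = start + shift
        if pos < 0 or pos + size > len(prev):
            continue
        candidate = prev[pos:pos + size]
        sad = sum(abs(a - b) for a, b in zip(candidate, cur_block))
        best = min(best, sad)
    return best

def reusable_blocks(prev, cur, block_size, search_range, threshold):
    """Count blocks of cur that find a good enough match in prev."""
    count = 0
    for start in range(0, len(cur) - block_size + 1, block_size):
        block = cur[start:start + block_size]
        if best_match_sad(prev, block, start, search_range) <= threshold:
            count += 1
    return count

prev = list(range(64))       # previous frame (synthetic 1-D signal)
slow = prev[2:] + [0, 0]     # content shifted by 2 samples: a slow pan
fast = prev[32:] + [0] * 32  # content shifted by 32 samples: a fast pan
```

With these synthetic frames and a search range of 4, the slow pan keeps seven of eight blocks perfectly reusable, while the fast pan keeps none.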
Moving seascapes and waterfalls are another example. There is a lot of motion
(although in the case of a waterfall, a very good simulation might be generated at a low
bit rate because the movement is cyclic and repeating). Moving images of waterfalls can
be looped without the viewer's noticing. The structured and object-based approach that
MPEG-4 provides might be used to good effect here if it is considered early enough in the
scene-building production workflow. Figure 31-4 shows two frames from a movie about
waterfalls. These are cyclic and repeating when you watch them; they would loop very
easily. The frame-to-frame changes are considerable from the compressionist's point of