Multipass
Multipass encoding, also known as two-pass or even three-pass encoding, is a technique for encoding and transcoding video into
another format using multiple passes to maintain the best quality. Basically, the video encoder/transcoder first
examines the video from start to finish without applying any compression: that's the first pass. During this analysis
pass, the encoder gathers information about the source video and writes it to a log file. On the subsequent pass or
two, the encoder runs through the video again, frame by frame, this time applying the compression, using the log from
the first pass as guidance to determine the optimal way to adjust the video quality within the predetermined limits
the user has set for the process.
Multipass encoding is used only in variable bitrate (VBR) encoding jobs, since constant bitrate (CBR) encoding
doesn't offer any “bend or give” for the encoder to regulate the available bitrate for each frame. I like to think of this
process as the difference between a “talking head” video and an “action sequence.” The talking head video typically
has little movement, so the bitrate can stay consistent across frames, and a one-pass job will usually suffice. An action
sequence needs to adjust its available bitrate as the frames in the scene become more complex with different color
values, blends, and/or heavy motion blur. Such a sequence will typically need a two- or even three-pass encode to reach the
desired quality and size within the specified video bitrate. This multipass technique produces better overall quality for
videos whose scenes vary widely in complexity.
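
To make this concrete, here is a minimal sketch of a two-pass VBR job scripted from Python. It assumes ffmpeg with the libx264 encoder is installed and on your PATH; the file names and bitrate are placeholders, and /dev/null would be NUL on Windows. The first pass writes the analysis log; the second pass reads it and applies the compression.

import subprocess

SOURCE = "input.mov"    # placeholder source file
TARGET = "output.mp4"   # placeholder output file
BITRATE = "1500k"       # target average video bitrate

# Pass 1: analyze the whole video and write the statistics log; discard the output.
subprocess.run([
    "ffmpeg", "-y", "-i", SOURCE,
    "-c:v", "libx264", "-b:v", BITRATE,
    "-pass", "1", "-an", "-f", "null", "/dev/null",
], check=True)

# Pass 2: read the pass-1 log and apply compression, spending more bits on complex frames.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "libx264", "-b:v", BITRATE,
    "-pass", "2", "-c:a", "aac",
    TARGET,
], check=True)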
Bitrate
Bitrate is typically one of those predetermined user settings developers adjust when transcoding video for the
Web. Bitrate is really the amount of data used to store each second of video, and in most cases, the higher the bitrate, the
sharper the video quality. This is of course not true if you take a heavily compressed video asset, such as an FLV, and
transcode it to an uncompressed codec. The encoder will write more data to the overall video without
increasing the picture fidelity; it's essentially overkill, because the picture information was already lost in the earlier
compression job. This practice gives you a significantly higher bitrate in a video whose quality has already been
compromised. Bitrate plays a huge role in the delivery of HTML5 video, as I'll discuss later with adaptive bitrate
streaming.
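
Because bitrate is data per second, you can roughly estimate a file's size from the bitrate and duration with simple arithmetic. Here is a quick back-of-the-envelope sketch in Python; the numbers are purely illustrative.

# Rough relationship between bitrate, duration, and file size.
# Bitrates are in kilobits per second; 8 bits = 1 byte.

def estimated_size_mb(video_kbps: float, audio_kbps: float, seconds: float) -> float:
    """Approximate file size in megabytes for a given total bitrate and duration."""
    total_kilobits = (video_kbps + audio_kbps) * seconds
    return total_kilobits / 8 / 1000  # kilobits -> kilobytes -> megabytes

# Example: a 5-minute clip at 1,500 kbps video plus 128 kbps audio
print(estimated_size_mb(1500, 128, 5 * 60))  # roughly 61 MB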
Deinterlace
In traditional broadcast television, the moving picture was transported from a station head-end in an interlaced
picture format. This means each frame is actually made up of individual scan lines split into two fields, with
alternating lines captured at slightly different moments, so a single displayed frame blends two instants of motion, creating an interlaced image. With newer
televisions and computer monitor technologies, the video picture on the Web is transported in a progressive manner,
which means every frame is an individual, complete picture delivered to the screen at one time. Confusing? Figure 7-1 explains.
Looking at the image on the left, you can see a blurring or ghosting effect. This effect is the interlaced
picture. On the right, you see the image deinterlaced, or progressive, which has no blurring effect, because each
frame is delivered as one full image. For delivery on the Web, it's best to use the progressive approach.
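
If your source footage is interlaced, you'll want to deinterlace it as part of the transcode. As a rough sketch, ffmpeg's yadif filter is one common way to do this; the example below assumes ffmpeg with libx264 is available, and the file names and bitrate are placeholders.

import subprocess

# Deinterlace with ffmpeg's yadif filter while transcoding to H.264 for the Web.
subprocess.run([
    "ffmpeg", "-i", "interlaced_input.mpg",   # placeholder interlaced source
    "-vf", "yadif",                           # "yet another deinterlacing filter"
    "-c:v", "libx264", "-b:v", "1500k",
    "-c:a", "aac",
    "progressive_output.mp4",                 # placeholder progressive output
], check=True)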
 