This analysis also illustrates why interlaced television systems must have horizontal raster lines: in real life, horizontal motion is more common than vertical. It is easy to calculate the vertical image motion velocity needed to reach the half-bandwidth speed of interlace, because it amounts to one raster line per field. In 525/60 (NTSC) there are about 500 active lines, so motion as slow as one picture height in 8 seconds will halve the dynamic resolution. In 625/50 (PAL) there are about 600 lines, so the half-bandwidth speed falls to one picture height in 12 seconds. This is why NTSC, with fewer lines and lower bandwidth, does not look as soft as it should compared to PAL: it actually has better dynamic resolution.
The situation deteriorates rapidly if an attempt is made to use interlaced scanning in systems with a lot of lines. In 1250/50, the resolution is halved at a vertical speed of just one picture height in 24 seconds. In other words, on real moving video a 1250/50 interlaced system has the same dynamic resolution as a 625/50 progressive system. By the same argument, a 1080i system has the same performance as a 480p system.
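These figures follow from simple arithmetic: the half-resolution speed is one raster line per field, so the time taken to cross one picture height at that speed is the active line count divided by the field rate. A minimal sketch in Python, using the approximate line counts quoted above:

```python
# Half-bandwidth vertical speed of an interlaced system: one raster
# line per field. Time to traverse one picture height at that speed
# is simply active_lines / field_rate.

def seconds_per_picture_height(active_lines: int, field_rate: int) -> float:
    """Seconds for vertical motion of one picture height at the
    speed where interlace halves dynamic resolution."""
    return active_lines / field_rate

# Approximate active-line counts used in the text.
systems = {
    "525/60 (NTSC)": (500, 60),
    "625/50 (PAL)":  (600, 50),
    "1250/50":       (1200, 50),
}

for name, (lines, rate) in systems.items():
    t = seconds_per_picture_height(lines, rate)
    print(f"{name}: resolution halved at 1 picture height in {t:.1f} s")
```

The printed values (roughly 8, 12 and 24 seconds) match the figures in the text.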
Interlaced signals are not separable, and so processes which are straightforward in progressively scanned systems become more complex in interlaced systems. Interlace can be regarded as a crude form of compression, and compression systems should not be cascaded indiscriminately, especially if they are different. Now that digital compression techniques based on transforms are available, it makes no sense to use an interlaced, i.e. already compressed, video signal as an input to them.
Interlaced signals are harder for MPEG to compress.[2] The confusion of temporal and spatial information makes accurate motion estimation more difficult, and this is reflected in a higher bit rate being required for a given quality. In short, how can a motion estimator accurately measure motion from one field to another when differences between the fields could equally be due to motion, vertical detail or vertical aliasing?
Computer-generated images and film are not interlaced, but consist of discrete frames spaced on a time axis. As digital technology brings computers and television closer together, the use of interlaced transmission is an embarrassing source of incompatibility. The future will bring image-delivery systems based on computer technology, and oversampling cameras and displays which can operate at resolutions much closer to the theoretical limits.
Interlace was the best that could be managed with thermionic valve technology sixty years ago, and we should
respect the achievement of its developers at a time when things were so much harder. However, we must also
recognize that the context in which interlace made sense has disappeared.
[2] Uyttendaele, A., Observations on scanning formats. Presented at HDTV '97, Montreux (June 1997)
5.7 Spatial and temporal redundancy in MPEG
Chapter 1 introduced these concepts in a general sense and now they will be treated with specific reference to
MPEG. Figure 5.18(a) shows that spatial redundancy is redundancy within a single picture or object, for example
repeated pixel values in a large area of blue sky. Temporal redundancy (b) exists between successive pictures or
objects.
In MPEG, where temporal compression is used, the current picture/object is not sent in its entirety; instead the
difference between the current picture/object and the previous one is sent. The decoder already has the previous
picture/object, and so it can add the difference, or residual image, to make the current picture/object. A residual
image is created by subtracting every pixel in one picture/object from the corresponding pixel in another. This is
trivially easy when pictures are restricted to progressive scan, as in MPEG-1, but MPEG-2 had to develop greater
complexity (continued in MPEG-4) so that this can also be done with interlaced pictures. The handling of interlace
in MPEG will be detailed later.
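For progressive pictures, the encode/decode round trip just described can be sketched in a few lines of Python. The function names and the flat-list picture representation are illustrative only, not MPEG syntax:

```python
# Sketch of temporal-difference coding on progressive pictures.
# Pictures are represented as flat lists of 8-bit pixel values;
# this is an illustration of the principle, not MPEG syntax.

def residual(current, previous):
    """Encoder: subtract each pixel of the previous picture from the
    corresponding pixel of the current one (signed result)."""
    return [c - p for c, p in zip(current, previous)]

def reconstruct(previous, res):
    """Decoder: add the residual to the picture it already holds."""
    return [p + r for p, r in zip(previous, res)]

prev = [100, 100, 100, 100]   # previous picture (held by the decoder)
curr = [100, 101,  99, 100]   # current picture at the encoder

res = residual(curr, prev)              # only this difference is sent
assert reconstruct(prev, res) == curr   # decoder recovers the picture
```

Note that the residual is signed; a real coder transforms and quantizes it before transmission rather than sending raw pixel differences.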
 