Digital Display Interface Standards Part 3

The Apple Display Connector

Some mention should also be made of the Apple Display Connector (ADC), although this is a proprietary design used (to date) only in Apple Computer, Inc. systems. In many ways, the ADC resembles both the VESA Plug & Display connector (in the “P&D-A/D” form) and the Digital Visual Interface standard. Like them, it is also based on the Molex “MicroCross” connector family, and physically resembles a P&D-A/D connector with a slightly modified shell shape. Also like the P&D and DVI standards, ADC supports both analog and digital outputs in a single physical connector, and again uses the “TMDS” electrical interface standard. As in the DVI connector, up to two TMDS data channels (comprising three data pairs each) are supported, and the ADC again relies on the VESA DDC and EDID standards for display identification and control.

Figure 10-10 The Apple Display Connector and its pinout.

In addition to the analog video, TMDS, and DDC interfaces supported by DVI, the ADC connector adds a power supply (two pins carrying +28 VDC, along with two dedicated return pins) and the USB interface. There is also a “soft power” signal (pin 13), which can be used to place the monitor into a low-power mode, thereby providing Apple monitors with a power-management system that is independent of the PC-standard VESA DPMS. The pinout for the ADC is shown in Figure 10-10.


Digital Television

As mentioned at the beginning of the topic, the development of “digital” television was to a great extent driven by the development of the computer industry, the reverse of the course followed by analog video interfaces. In fact, to this point there is still not a widespread, consumer-level digital interface standard for television; the first such may come through the consumer industry’s adoption of DVI, as mentioned above. Digital television began first as a production or broadcast studio technology, permitting a wider range of storage, editing, and effects options than had been available with the earlier analog-only systems. As these applications did not involve the development of significant new interface standards specifically oriented toward displays, they are beyond the scope of this work and will not be examined in detail here.

From the standpoint of the requirements on the video interface itself, television in either analog or digital form generally represents a less-demanding application than does computer video, solely due to the much lower data rates required. However, getting “TV” and “computer” signals to co-exist in a single system can be a challenging problem, due to a number of factors. For one thing, the data rates required for digital television can actually be below the limits of many computer-oriented interfaces. As an example, the pixel clocks normally used for standard-definition television (usually represented using either 720 x 480 or 720 x 576 image formats, or similar) fall under the typical lower limit for the digital interfaces discussed above, if transmitted in their usual interlaced form. (TMDS, for instance, typically has a lower pixel clock limit of 25 MHz.) In addition, the different color encoding methods used for television, along with the need to carry synchronized supplemental data such as audio, further complicate the compatibility issue. Finally, while most computer graphics systems are designed around the assumption of “square” pixels (equal numbers of samples per unit distance in both horizontal and vertical directions), this is not the case in most digital television standards.
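As a rough illustration of the pixel-clock point, the following Python sketch computes the clocks for the two common interlaced standard-definition systems and compares them against the 25 MHz TMDS floor cited above. The line and sample totals follow ITU-R BT.601-style timing and are used here for illustration only.

```python
# Estimate standard-definition pixel clocks and compare them with a
# typical TMDS lower limit. Totals are BT.601-style illustrative figures.

def pixel_clock_hz(total_px_per_line, total_lines, frame_rate_hz):
    """Pixel clock = total samples per line x total lines x frame rate."""
    return total_px_per_line * total_lines * frame_rate_hz

# 525-line system: 858 total samples/line, 525 total lines, ~29.97 frames/s
ntsc_clock = pixel_clock_hz(858, 525, 30000 / 1001)
# 625-line system: 864 total samples/line, 625 total lines, 25 frames/s
pal_clock = pixel_clock_hz(864, 625, 25)

TMDS_MIN_HZ = 25e6  # typical lower pixel-clock limit cited for TMDS

for name, clk in [("525/59.94i", ntsc_clock), ("625/50i", pal_clock)]:
    status = "below" if clk < TMDS_MIN_HZ else "above"
    print(f"{name}: {clk / 1e6:.2f} MHz ({status} the 25 MHz TMDS floor)")
```

Both systems work out to a 13.5 MHz sampling clock, comfortably under the 25 MHz limit, which is exactly the mismatch described above.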

General-Purpose Digital Interfaces and Video

While not in general used as display interfaces per se, two popular digital interface standards were designed with the transmission of digital video in mind, and deserve some mention here. They have not to date seen widespread use as display connections, but have at least some potential in this role, especially in consumer-entertainment applications.

The Universal Serial Bus, or USB, was first introduced by a consortium of seven companies (Intel, IBM, NEC, Compaq, Digital Equipment Corp., Microsoft, and Northern Telecom) in 1995. It was intended as a general-purpose, low-to-medium speed, low-cost connection for desktop PC devices and other applications requiring only a short-distance interconnect. USB 1.0 defined a very flexible, “self-configuring” desktop connection system, with two levels of performance: a 1.5 Mbps link for keyboards and other low-speed devices, and a higher-performance 12 Mbps link with the potential for supporting high-quality digital audio and even some basic digital video devices. The USB connector/cable system uses just four wires: a +5 V power connection and its associated ground, plus a bidirectional differential pair for the data signals. The data transmission format is specified such that the serial data stream is “self-clocking”, i.e., the timing information required to properly recover the data may be derived from the data stream itself.
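The “self-clocking” property comes from USB’s use of NRZI encoding with bit stuffing: a 0 bit is sent as a level transition, a 1 bit as no transition, and after six consecutive 1s an extra 0 is stuffed into the stream so that the receiver always sees transitions often enough to recover the bit timing. The following Python sketch models only this encoding idea; it is a simplified illustration, not a full USB signalling implementation.

```python
# Simplified model of NRZI encoding with bit stuffing, the mechanism
# behind USB's "self-clocking" data stream.

def nrzi_encode(bits, initial_level=1):
    """Return the line levels for a bit sequence under NRZI + stuffing."""
    level = initial_level
    out = []
    run_of_ones = 0
    for bit in bits:
        if bit == 0:
            level ^= 1            # a 0 bit is sent as a transition
            run_of_ones = 0
        else:
            run_of_ones += 1      # a 1 bit is sent as no transition
        out.append(level)
        if run_of_ones == 6:      # bit stuffing: force a transition
            level ^= 1
            out.append(level)
            run_of_ones = 0
    return out

# Seven 1s in a row trigger one stuffed bit, so 8 input bits
# produce 9 line symbols.
line = nrzi_encode([1, 1, 1, 1, 1, 1, 1, 0])
print(line)
```

The stuffed bit costs a little capacity but guarantees the receiver a transition at least every seven bit times, from which it can rebuild the clock.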

A single USB host controller can support up to 127 peripheral devices simultaneously, although it must allocate the available channel capacity among these. Typically, practical USB installations will have perhaps a half-dozen devices on the interface at once, especially if any of these have relatively high data-rate requirements, such as digital audio units or a video camera.

The USB 1.0 specification has so far seen most of its acceptance in the expected markets: human-input devices for PCs (keyboards, mice, trackballs, etc.) and, significantly, low-to-medium-resolution video devices such as simple cameras. However, the recent development of a much more powerful version of the standard may increase its acceptance in the video/display areas. USB 2.0, in its initial release, defined performance levels up to 480 Mbps, more than sufficient for the support of standard compressed HDTV data and even the transmission of uncompressed video at “standard definition” levels. USB was defined from its inception to permit the transmission of “isochronous” data, meaning types such as audio or video in which the timing of each data packet within the overall stream must be maintained for proper recovery at the receiver.
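A quick back-of-the-envelope check supports the claim about uncompressed standard-definition video. The figures below are illustrative (720 x 480 at 30 frames/s with 4:2:2 sampling, roughly 16 bits/pixel), not taken from any specification.

```python
# Does USB 2.0's 480 Mbit/s cover uncompressed standard-definition video?
# Illustrative arithmetic only.

def video_mbps(width, height, fps, bits_per_pixel):
    """Raw video data rate in Mbit/s (no blanking, no overhead)."""
    return width * height * fps * bits_per_pixel / 1e6

sd_422 = video_mbps(720, 480, 30, 16)   # 4:2:2 sampling ~ 16 bits/pixel
print(f"Uncompressed SD (720x480, 30 fps, 16 bpp): {sd_422:.0f} Mbit/s")
print(f"Fits within USB 2.0's 480 Mbit/s: {sd_422 < 480}")
```

At roughly 166 Mbit/s, uncompressed SD uses only about a third of the raw 480 Mbit/s channel, leaving headroom for protocol overhead and other traffic.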

At this level of performance, USB 2.0 may be a serious competitor for the other widely used general-purpose digital interface, the IEEE-1394 standard. The “1394” system is also often referred to as “FireWire™,” although properly speaking that name should be used only for the implementations of the system by Apple Computer, which originated the technology in 1986. IEEE-1394 is conceptually similar to the USB system – a point-to-point general-purpose interconnect using a small, simple connector – but supported much higher data rates at its first introduction. Like USB, the 1394 interface supports isochronous data transmission, and so has been widely accepted in digital audio and video applications. The standard “1394” connector has six contacts: one pair for power and ground, as in USB (although typically at higher voltages), and two pairs which make up the data channel.

The IEEE-1394/”FireWire” interface has been introduced on some products as a consumer-video connection, although not as a display interface per se. In its original form, this standard defined several levels of performance, up to a maximum of 400 Mbits/s. This is more than sufficient for standard television-quality video (roughly 640 x 480 to 720 x 576 pixels at 60 fields/s, 2:1 interlaced), but not for typical computer-display video formats and timings. The “1394” interface is very likely to see increases in capacity, to as much as 3.2 Gbits/s, but even this is still somewhat low for use with today’s “high-resolution” displays and timings. Both IEEE-1394 and USB 2.0, therefore, may become widely used for consumer and even professional video-editing and similar applications, but neither is likely to see any serious use as a high-resolution display interface.
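The gap between television-rate and computer-display-rate video can be made concrete with the same sort of arithmetic. The computer timing below (1600 x 1200 at 85 Hz, 24 bits/pixel, active pixels only) is one illustrative example, not a figure from either standard.

```python
# Compare television-rate video and a computer display timing against
# the original 400 Mbit/s IEEE-1394 rate and a projected 3.2 Gbit/s rate.
# Illustrative figures only.

def mbps(width, height, rate_hz, bits_per_pixel):
    """Raw data rate in Mbit/s for the active pixels alone."""
    return width * height * rate_hz * bits_per_pixel / 1e6

tv   = mbps(720, 576, 25, 16)     # 625-line SD, 4:2:2 sampling, ~16 bpp
uxga = mbps(1600, 1200, 85, 24)   # 1600x1200 @ 85 Hz, 24 bpp

print(f"SD television  : {tv:7.0f} Mbit/s (fits in 400:  {tv < 400})")
print(f"1600x1200 @ 85 : {uxga:7.0f} Mbit/s (fits in 3200: {uxga < 3200})")
```

Television video fits easily within the original 400 Mbit/s, while the computer timing exceeds even 3.2 Gbit/s before any blanking or protocol overhead is counted, which is the conclusion drawn in the text.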

Future Directions for Digital Display Interfaces

To date, digital interfaces as used for display devices themselves (as opposed to the more general usage of digital techniques for video, as in the television industry) have more or less simply duplicated the functions of the earlier analog systems. In the computer industry, for example, analog RGB video connections have begun to give way to digital, but these new interfaces still provide video information in parallel RGB form, with regular refreshes of the entire display. The encoding of the information has changed form, but nothing else in the operation of the display system.

In future standards, including several that are currently under development, this situation is expected to change. With the image information remaining in digital form from generation through to the input of the display, new models of operation become possible. Most of these rely on the assumption that the display itself will, in an “all-digital” world, contain a frame buffer, a means of storing at least one full frame of video information. This is actually quite common today even in many analog-input, non-CRT displays, as it is required for frame-rate conversion.

Frame storage within the display enables much more efficient usage of the available interface capacity. First, there is no longer any need for the image data to be updated at the refresh rate required for the display technology being used. CRT displays, for example, will generally require a refresh rate of 75-85 Hz to appear “flicker-free” to most users, but providing this frame rate over a product-level interface (i.e., one between physically separate products, as opposed to the interface which might exist within a monitor, between its electronics and the display device itself) can be extremely challenging, especially over long distances. If a frame buffer is placed into the monitor, between the external interface and the display device itself, frame-rate conversion may be performed within the monitor, permitting the display to be refreshed at a much higher rate than is now required on the interface. Further, since the display timing is now decoupled to a large degree from the timing of the incoming display data, the external interface need not lose capacity to “overhead” losses such as the blanking intervals (which represent idle time for the interface, using the traditional model). (Frame-rate conversion is already used in many non-CRT displays, such as LCD-based monitors, but in the other direction – to convert the wide range of refresh rates used in “CRT” video to the relatively narrow range usable by most LCD panels.) The data rate required on the interface is now determined solely by the pixel format and the frame rate needed for acceptable motion rendition.
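The size of these savings is easy to estimate. The sketch below uses one example timing (1280 x 1024 with blanking totals of 1728 x 1072, refreshed at 85 Hz) against a 60 Hz update rate; the specific numbers are chosen for illustration.

```python
# Interface pixel rate: traditional (full frame plus blanking at the
# display refresh rate) vs. frame-buffered (active pixels only, at the
# rate needed for motion rendition). Example figures only.

active_w, active_h = 1280, 1024
total_w, total_h   = 1728, 1072   # example totals including blanking

refresh_hz = 85    # what a CRT might need to look flicker-free
update_hz  = 60    # frame rate adequate for motion rendition

traditional = total_w * total_h * refresh_hz    # pixels/s on the wire
buffered    = active_w * active_h * update_hz   # pixels/s on the wire

print(f"traditional: {traditional / 1e6:6.1f} Mpixels/s")
print(f"buffered   : {buffered / 1e6:6.1f} Mpixels/s")
print(f"reduction  : {100 * (1 - buffered / traditional):.0f}%")
```

In this example the interface rate is roughly halved, from the blanking savings and the lower update rate combined, before any conditional-update techniques are applied.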

Further improvements in the efficiency of the interface may be obtained by realizing that the typical video transmission contains a huge amount of redundant information. If the image being displayed, for example, is text being typed on a plain white background – very common in computer applications – repeatedly sending the information representing any part of the image but the new text in each frame is a waste of capacity. Again assuming that the display contains sufficient frame storage and “intelligence,” it will be far more efficient to permit conditional update of the display. Using this method, only those portions of the image which have changed from frame to frame need to be transmitted, greatly reducing the data rates required of the interface. (Note, however, that if full-screen, full-motion video is possible, the peak capacity required to handle this must still be available – although only at the rate required for convincing rendition of the motion, as noted above.) As with any reduction in information redundancy – as was previously discussed relative to data compression techniques – the possibility of errors in the transmitted data becoming a problem for the user is increased. However, this can be addressed by including some form of error detection and/or correction into the system definition, without significantly affecting the improvement in efficiency.
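The conditional-update idea can be sketched very simply: divide each frame into tiles, compare against the previous frame, and transmit only the tiles that changed. Everything below (the tile size, the frame representation as nested lists) is hypothetical, chosen only to make the concept concrete.

```python
# Minimal sketch of conditional update: send only the tiles of the
# frame that differ from the previously transmitted frame.

def changed_tiles(prev, curr, tile=16):
    """Yield (x, y, block) for each tile that differs between frames.

    prev and curr are 2-D lists of pixel values, identical in size.
    """
    h, w = len(curr), len(curr[0])
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            new_block = [row[x:x + tile] for row in curr[y:y + tile]]
            old_block = [row[x:x + tile] for row in prev[y:y + tile]]
            if new_block != old_block:
                yield x, y, new_block

# Example: a 32x32 "white page" on which a single pixel of new text
# appears, so only one of the four 16x16 tiles needs to be sent.
prev = [[255] * 32 for _ in range(32)]
curr = [row[:] for row in prev]
curr[5][5] = 0

updates = list(changed_tiles(prev, curr))
print(f"{len(updates)} of 4 tiles need to be sent")
```

For the plain-text-on-white case described above, nearly every frame would transmit only the handful of tiles touched by new characters, while a full-motion video window would simply mark all of its tiles changed, which is why the peak capacity must still be provisioned.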

Figure 10-11 “Packet” video. In a digital transmission system, video data may be “packetized” into blocks of a predefined format. In this hypothetical example, data packets have been defined which include the address of the intended recipient device, commands for that device, identification of the type of data (if any) the packet carries, and the data itself (which may be of variable length). Such a system would permit the addressing of multiple display devices over a single physical connection, and even the transmission of data types besides video – such as text or digital audio – with each type being properly routed only to devices capable of handling it.

The ability to make more efficient use of available interface capacity can be exploited in several ways. First, and probably most obviously, higher pixel counts – “higher resolution” formats – can be accommodated than would otherwise be the case. (Or, conversely, a given format can be supported by a lower-rate interface.) However, by adopting techniques from common digital-networking practice, new capabilities may be introduced that were never possible in previous display interfaces. By “packetizing” the data – sending bursts of information in a predefined format (Figure 10-11) – and by defining portions of these packets as containing address and other information besides the image data itself, several new modes of operation become possible. First, a single physical and electrical channel could be used to carry multiple types of information; besides the video data itself, audio and other supplemental data (such as teletext) can be carried simply by permitting each to be uniquely identified in the packet “header.” With the packet also providing address information, multiple displays could be connected to a single host output. This would rely on sophisticated display identification and control systems, building on existing standards, to communicate the unique capabilities of each display in the system to the host. The ability to support multiple separate displays could also be extended to support arrays of physically separate display devices which are to be viewed as providing a single image – a “tiled” display system, as in Figure 10-12.
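A packet layout along the lines of Figure 10-11 might be modeled as follows. The field widths, type codes, and command values here are entirely invented for illustration; they do not correspond to any published packet-video format.

```python
# Hypothetical packet-video format: a header carrying the destination
# address, a command code, and a data-type tag, followed by a length
# field and a variable-length payload. All field definitions invented.

import struct

TYPE_VIDEO, TYPE_AUDIO, TYPE_TEXT = 0x01, 0x02, 0x03
HEADER = struct.Struct(">BBBH")   # address, command, data type, length

def make_packet(address, command, data_type, payload):
    """Assemble a header plus variable-length payload into one packet."""
    return HEADER.pack(address, command, data_type, len(payload)) + payload

def route(packet, displays):
    """Deliver a packet only to the addressed device, and only if that
    device has declared it can handle the packet's data type."""
    address, command, data_type, length = HEADER.unpack(packet[:HEADER.size])
    payload = packet[HEADER.size:HEADER.size + length]
    dev = displays.get(address)
    if dev is not None and data_type in dev["handles"]:
        dev["received"].append((command, data_type, payload))

# Two devices share one channel: a display and an audio unit.
displays = {
    1: {"handles": {TYPE_VIDEO, TYPE_TEXT}, "received": []},
    2: {"handles": {TYPE_AUDIO},            "received": []},
}

route(make_packet(1, 0x10, TYPE_VIDEO, b"\x00\x7f\xff"), displays)
route(make_packet(2, 0x10, TYPE_AUDIO, b"\x01\x02"), displays)
print(len(displays[1]["received"]), len(displays[2]["received"]))
```

The same addressing mechanism extends naturally to the tiled arrangement of Figure 10-12: each tile is simply another addressed device, and the host directs the appropriate region of the image to each address.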

Such a “packet video” system is currently under development by the Video Electronics Standards Association (VESA), based on a system originally developed by Sharp Electronics, Hitachi, Toshiba, and IBM Japan. The Digital Packet Video Link, or “DPVL,” standard is expected to provide all of the functionality described above, and is currently planned to be released in two stages. The first, “DPVL-Light,” may be published as a standard by late 2002 or early 2003, and will support some of the more basic functionality in a single-display version of the system. This first version will be capable of being added to existing systems with a minimum of hardware changes. Later, the full DPVL standard will enable full packet-video functionality in new designs.

Figure 10-12 A tiled display system. In this type of display system, multiple separate display devices are physically arranged so as to be viewed as a single image. Ideally, the borders between the individual screens are zero-width, or at least narrow enough so as to be invisible to the viewer. Such a system is difficult to manage with conventional interfaces and separate image sources, but becomes almost trivially simple when using a packetized data transmission system.

DPVL and similar packet-video systems will not necessarily require new physical and electrical interface standards, at least initially. At first, they may be expected to use existing channels, such as the TMDS interface and the DVI physical connection, and could be viewed simply as a new command/data protocol layer. (Note that many of the electrical interfaces presented in this topic, such as LVDS, TMDS, etc., can be used as general-purpose digital channels, despite being primarily used in display applications at present.) Eventually, new physical and electrical standards may be required for a system which is truly optimized for packet-video transmission. At the very least, the display ID and control channel will likely need to be improved from current standards, to permit better support for multiple displays. A higher-speed “back channel,” capable of carrying greater amounts of information from the display to the host (the current digital display interfaces are basically unidirectional), may also be required. Possible solutions for all of these currently exist, however, and so the development and acceptance of packet video as a standard display interface method will again be limited primarily by the difficulty and costs of making the transition from current standards.
