Standards for Analog Video, Part II: The Personal Computer (Display Interfaces), Part 1

Introduction

For over 30 years, broadcast television represented essentially the only electronic display system, and display interface, in truly widespread use. Various forms of electronic displays were in use in many applications, notably in test and measurement equipment and in early computing devices, but these were for the most part “embedded” display applications in which the display device was an integral part of the product. Such uses do not require much in the way of standardized external display interfaces. When remote location of the display itself was required, most often the connection was made using either an interface custom-designed for that product, or via television standards.

Through the 1960s and 1970s, the growing importance of electronic computing began to change this. Earlier analog computers had made use of existing “instrumentation-style” output devices – such as oscilloscopes and X-Y pen plotters – but the digital computer’s ability to be programmed via something resembling “plain language” required a human interface more suited to text-based communications. At first, standard teletype units were adapted to this use. These were electromechanical devices that combined a keyboard and a printer with a simple electronic interface, capable of translating between these devices and a standard binary code for each character. (Direct descendants of these codes are the modern character codes used to represent text information in digital form; examples include the American Standard Code for Information Interchange (ASCII) and the various ISO character code definitions.)


Teletype machines, while workable, had several significant disadvantages. Besides being large, slow, and noisy, they were greatly hampered by the need to print everything – both the text being entered by the operator, and any response from the computer – on paper. Editing text was particularly difficult with such a system, and any but the simplest interaction with the computer consumed a large amount of paper. The next logical step was to replace the printed page as the sole output device with an electronic display. As the CRT was the only practical option available, it became the heart of the first all-electronic computer terminals.

Still, these were little more than teletype machines with a “printer” that needed no paper. Input and output were still completely text-based, and the interface was essentially identical to that of the teletype – a digital connection, sometimes unique to the system in question, over which the operator and the computer communicated via character codes. (Computer printers, of course, continued to develop separately, but now as an output device used only when needed.)

Character-Generator Display Systems

The CRT terminal is of interest here, as it introduced the first use of character-generator text display. In this form of display, text is stored in the form of the ASCII or other code set used by the terminal. The displayed image is produced by scanning through these codes in the order in which the text is to appear on the screen – usually in the form of lines running left to right and top to bottom – with the codes used as indices into a character read-only memory (ROM). This ROM stores the graphic representation of each character the terminal is capable of displaying. In this type of system (Figure 9-1), each character is generally produced within a fixed-size cell; the top line of each cell is read out, in order, for each character in a given line of text, then the next line within each cell, and so forth. A line counter tracks the correct position vertically through the lines of text, as well as the appropriate line within each set of character cells. In this system, each visible line must be an integral number of character-cell widths in length, and for simplicity the blanking time and all subdivisions thereof are also counted in character widths. To this day, the use of the term “character” to denote the smallest increment of horizontal timing has been maintained.
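The scan-out process can be summarized in software form. The following sketch is purely illustrative (the real hardware is a set of counters, a ROM, and a shift register, not a program); the 8 x 14 cell size, the 80 x 25 text grid, the two-entry character ROM, and all of the names used are assumptions made for this example.

# Illustrative sketch of character-generator scan-out (not real hardware).
# Assumed: 8-pixel by 14-line character cells, an 80 x 25 text grid, and a
# toy character ROM holding one 8-bit row pattern per line of each glyph.

CELL_W, CELL_H = 8, 14          # pixels per cell line, lines per cell
COLS, ROWS = 80, 25             # text columns and rows

char_rom = {                    # stub "character ROM"; a real one holds every glyph
    0x20: [0x00] * CELL_H,                      # space: all pixels off
    0x2D: [0x00] * 6 + [0x7E] + [0x00] * 7,     # '-': one lit row mid-cell
}

def scan_out(text_ram):
    """Yield one list of pixel bits per visible scan line.

    text_ram is a ROWS x COLS array of character codes (the text memory)."""
    for row in range(ROWS):                 # character-row position on screen
        for cell_line in range(CELL_H):     # line counter within the cell
            line_pixels = []
            for col in range(COLS):         # character (column) counter
                code = text_ram[row][col]
                rom_row = char_rom.get(code, [0x00] * CELL_H)[cell_line]
                # Serialize the ROM byte, most-significant bit first
                # (the job of the parallel-in, serial-out shift register).
                line_pixels.extend((rom_row >> (7 - b)) & 1 for b in range(CELL_W))
            yield line_pixels

# Example: a screen of spaces with a dash in the top-left character position.
text = [[0x20] * COLS for _ in range(ROWS)]
text[0][0] = 0x2D
print(len(next(scan_out(text))), "pixels per visible line")   # 640 (80 cells x 8 pixels)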

The data from the character ROM is serialized, generally using a parallel-in, serial-out shift register, and is then used (after amplification to the required level) to control the CRT electron beam. The same counters which track the line count within the displayed frame and the character count within each line are also used to produce the synchronization signals for the CRT, simply by starting and stopping the sync pulses at the appropriate counts in each direction. Integrated, programmable control ICs to perform these functions were soon developed, requiring only the external ROM for character storage, a clock of the proper frequency to produce the desired timing, and a few other minor components.
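Deriving the sync signals from the same counters can be sketched as follows; all of the counts here are arbitrary example values chosen for illustration, not the timing of any particular standard.

# Illustrative derivation of separate H and V sync from the character and
# line counters. All counts below are arbitrary example values.

CHARS_PER_LINE_TOTAL = 100    # visible characters plus horizontal blanking
LINES_PER_FRAME_TOTAL = 449   # visible lines plus vertical blanking

HSYNC_START, HSYNC_END = 82, 94     # character counts within each line
VSYNC_START, VSYNC_END = 440, 442   # line counts within each frame

def sync_signals(char_count, line_count):
    """Return (hsync, vsync) booleans for the current counter values."""
    hsync = HSYNC_START <= char_count < HSYNC_END
    vsync = VSYNC_START <= line_count < VSYNC_END
    return hsync, vsync

# Example: step the counters through one frame and count horizontal pulses.
pulses, prev_h = 0, False
for line in range(LINES_PER_FRAME_TOTAL):
    for char in range(CHARS_PER_LINE_TOTAL):
        h, _v = sync_signals(char, line)
        pulses += (h and not prev_h)
        prev_h = h
print(pulses, "horizontal sync pulses per frame")   # one per line: 449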

Features such as underlining, inverse video, and blinking were fairly simple to add to such a system. Blinking and/or inversion of the text (e.g., producing a black character within a white cell rather than vice-versa) are achieved simply by blocking or inverting the data from the character ROM, gating it with a signal derived by dividing down the vertical sync signal to produce the desired blink rate. Underlining is easily achieved by forcing the video data line to the “on” state during the correct line of each character cell.
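This gating can be expressed in the same sketch form; the attribute flags, the underline position, and the blink divisor below are assumptions made for the example.

# Illustrative gating of character-ROM data for blink, inverse video, and
# underline. The attribute encoding and blink divisor are assumed, not standard.

UNDERLINE_LINE = 12   # assumed cell line on which the underline is drawn

def apply_attributes(rom_row, cell_line, attrs, frame_count):
    """Return the 8-bit pixel pattern for one cell line after attribute gating.

    rom_row     -- byte read from the character ROM for this cell line
    cell_line   -- line number within the character cell (0-13)
    attrs       -- boolean flags: 'blink', 'inverse', 'underline'
    frame_count -- running frame counter, divided down to set the blink rate"""
    data = rom_row
    if attrs.get('underline') and cell_line == UNDERLINE_LINE:
        data = 0xFF                                  # force the video line "on"
    if attrs.get('blink') and (frame_count // 30) % 2:
        data = 0x00                                  # blank the character this frame
    if attrs.get('inverse'):
        data ^= 0xFF                                 # black-on-white instead of white-on-black
    return data

# Example: an inverted character row; the background lights and the glyph goes dark.
print(format(apply_attributes(0x18, 3, {'inverse': True}, 0), '08b'))   # "11100111"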

Simple color can also be achieved in such a system, by storing additional information with each character. For example, the character code itself might be stored as eight bits, plus three added bits, one for each of the primary colors. This permits each character to be assigned any of eight colors, and the color information is simply gated by the character ROM data line to control the three beams of a color CRT.
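Continuing the same sketch, the three per-character color bits might gate the serialized ROM data as shown below; the bit layout is an assumption for illustration.

# Illustrative per-character color: three attribute bits (R, G, B) gate the
# serialized character-ROM data to drive the three guns of a color CRT.

def color_pixels(rom_row, r_bit, g_bit, b_bit):
    """Return a list of (R, G, B) values for one 8-pixel cell line."""
    pixels = []
    for bit in range(7, -1, -1):
        on = (rom_row >> bit) & 1          # serialized character-ROM data
        pixels.append((on & r_bit, on & g_bit, on & b_bit))
    return pixels

# Example: the row pattern 0x7E displayed in yellow (red and green bits set).
print(color_pixels(0x7E, 1, 1, 0))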

Figure 9-1 Character-generator video system. The image of each possible character is stored in a read-only memory (ROM), and read out under the control of a line counter and the output of text memory (RAM), which contains the codes corresponding to the text to be displayed on the screen. In this example, each character occupies an 8 pixel by 14 line cell; each cell may be viewed as having space for the main body of each character (A), plus additional lines for descenders, line-to-line spacing, etc. (B). The characters are selected in the proper sequence by addressing the text memory via character and line counters. Not shown is the data path from the CPU to the text memory.

Graphics

While the character-generator CRT terminal was a major improvement over the teletype machine, there was still no means for producing graphical output other than electromechanical systems such as pen plotters.

The logical development from the character-generator system, in terms of increasing the graphics capability, was to permit the images to be drawn in memory (a bit-mapped “frame buffer”) rather than coming from the permanent storage of a ROM. With the computer now able to create images in this manner, the concept of a pixel broadened from its original meaning of simply a point sample of an existing image. Computer synthesis of imagery led to the “pixel” being thought of as simply the building block out of which images could be made. The concept of the pixel as literally a “little square of color,” while technically incorrect, has become very deep-seated in the minds of many working in the computer graphics field. This unfortunately leads to some misconceptions, and conflicts with the proper point-sample model. Despite this conceptual problem, this usage of “pixel” is pervasive, and is practically impossible to avoid in discussions of computer graphics and display-system issues.
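A minimal sketch of such an image memory, assuming a 640 x 480, one-bit-per-pixel organization, illustrates the shift from ROM-based character cells to images drawn freely in memory:

# Minimal bit-mapped frame buffer sketch (assumed 640 x 480, 1 bit per pixel).
# Unlike the character-generator system, any individual pixel can be altered.

WIDTH, HEIGHT = 640, 480
framebuffer = bytearray(WIDTH * HEIGHT // 8)    # packed, 8 pixels per byte

def set_pixel(x, y, on=True):
    """Set or clear a single pixel in the packed frame buffer."""
    index, bit = (y * WIDTH + x) // 8, 7 - (x % 8)
    if on:
        framebuffer[index] |= (1 << bit)
    else:
        framebuffer[index] &= 0xFF ^ (1 << bit)

# Example: draw a diagonal line, something a character generator cannot do.
for i in range(100):
    set_pixel(i, i)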

Regardless of what we define as a “pixel,” however, the contents of the frame buffer must still be transferred in some way to the display device, and so we now move to consideration of the development of the display interface itself, as used in the computer industry.

Early Personal Computer Displays

The first personal computers had to rely either on existing terminal-type displays, communicating via standard serial or parallel digital interfaces, or on the one standard display device available – the television. With the PC market in its infancy, no standard computer monitor products yet existed. The original Apple II, Atari, and Commodore VIC PCs are all examples of early computers designed to use standard television receivers or video monitors as their primary display. A very few of the early PCs, notably the Commodore PET (1977), provided an integrated CRT display, although these were still generally based on existing television components.

But the limitations of such displays quickly became apparent. Character-based terminals were incapable of providing the graphics that PC users increasingly demanded, and more capable terminals were not an economical alternative for the home user. Televisions could not provide the resolution required for any but the simplest PC applications, especially when viewed from typical “desktop display” distances. Using television receivers as computer displays also required the use of TV-style color encoding, with the resulting loss of color resolution and overall quality. Display products specifically intended for computer use, and with them standardized monitor connections, were needed. These came with the introduction of the original IBM Personal Computer in 1981.

It should be noted at this point that many higher-end computers, such as those intended specifically for the scientific and engineering markets, retained the integrated-display model for some time. These more expensive products could afford displays specifically designed as high-resolution (for the time), high-quality imaging systems. Note, however, that no real “display interface” standardization had occurred in such systems. The display was directly connected to, and commonly in the same physical package as, the hardware that produced the images in the first place. When connections to external display devices were provided, they were either special, proprietary designs, or used existing standards from the television industry. As in the “lower-end” personal computer market, this reliance on standards originally developed for television use was a major factor in shaping the signal standards and practices used in these systems. Such “scientific” desktop computer systems later developed into the “engineering workstation” market, which progressed separately and along a somewhat different path from that of the more common “personal computer” (or “PC”).

The IBM PC

The IBM PC (and the “clones” of this system which quickly followed) was among the first, and certainly was the most successful, personal computer system to use the “two-box” (separate monitor and CPU/system box) model with a display specifically designed for the computer. Rather than using a standard “television” video output, and a display which was essentially a repackaged TV set (or even a standard portable television), IBM offered a series of display products as part of the complete PC system. To drive these, several varieties of video cards, or “graphics adapters” in the terminology introduced with this system, were provided. This model quickly became the standard of the industry, and at least the connector introduced with one of the later systems remains the de facto standard analog video output for PCs to this day.

These original IBM designs were commonly referred to by a three- or four-letter abbreviation which generally included “GA,” for “graphics adapter” or later “graphics array.” The first generation included the Monochrome Display Adapter (“MDA,” the one example which did not include “graphics,” as it had nothing in the way of graphics capabilities), the Color Graphics Adapter (CGA), and the Enhanced Graphics Adapter (EGA). Later additions to the “GA” family included the Video Graphics Array, the Professional Graphics Adapter, and the Extended Graphics Array (VGA, PGA, and XGA, respectively), and so forth. Today, only the “VGA” name continues in widespread use in reference to a standard connector, although several of these names (notably VGA, SVGA, XGA, and SXGA) continue to be used in reference to display formats originally introduced with that hardware. (For example, “XGA” today almost always refers to the 1024 x 768 format, not to the original XGA hardware.)

MDA/Hercules

The original MDA was a simple monochrome-only card, intended for use with a fixed-frequency display and providing what was effectively a 720 x 350 image format, although it was capable only of producing a text display using the ROM-based character-generator technique described above. This system used a fixed character “cell” of 9 x 14 pixels, and so the image format produced can more properly be described as 25 lines of 80 characters each (80 characters x 9 pixels = 720 pixels per line; 25 rows x 14 lines = 350 lines). (This “80 x 25” text format remains a de facto standard for such displays.) The connection to the display was via a 9-pin D-subminiature connector, whose pinout is shown in Figure 9-2. Note that this might be considered a “digital” output, although if so it is of the very simplest variety. The primary video signal provided was a single TTL-level output, which simply switches the CRT’s beam on and off to create the characters (although a separate “intensity” output was also provided, which could be used to change the brightness on a character-by-character basis). The MDA output also provided separate TTL-level horizontal and vertical synchronization (“sync”) signals, a system which has been retained in PC standards to this day.

A similar video card of the same vintage was the “Hercules” graphics adapter, a name which is still heard in discussions of this early hardware. The Hercules card (named for the company which produced it) used essentially the same output as the IBM MDA, but a slightly different image format.

Figure 9-2 The MDA video output connector and pinout. This started the use of the 9-pin D-subminiature connector as a video output, which continued with the CGA and EGA designs.

CGA and EGA

A step up from the MDA was the Color Graphics Adapter, or “CGA” card. This permitted the PC user to add a color display to the system, albeit one that could provide only four different colors simultaneously, and that only at a relatively low-quality display format of 320 x 200 pixels. CGA also provided the option of monochrome operation at 640 x 200, mimicking the MDA format but with a slightly smaller character cell. Again, the 9-pin D-subminiature connector type was used (Figure 9-3), with previously unused pins now providing the additional outputs required for color operation.

A further increase in capabilities could be had by upgrading to the Enhanced Graphics Adapter, or EGA. This supported 16 different colors simultaneously, with a format of 640 x 350 pixels, in both graphics and text modes. The EGA retained the 9-pin connector of the CGA and MDA types, but with a slightly different pinout (also listed in Figure 9-3) as needed to support the increased color capabilities.

Figure 9-3 The revised pinout of the 9-pin connector for the Color Graphics Adapter (CGA) and Enhanced Graphics Adapter (EGA) products.

VGA – The Video Graphics Array

With the introduction of the VGA hardware and software definitions by IBM in 1987, the stage was set for PC video and graphics systems to come into their own as useful tools for both the home and professional user. Later products would build on VGA by increasing the pixel counts supported, adding new features, etc., but the basic VGA interface standards remain in use to this day. (A separate, lower-capability system introduced at the same time as VGA, the “MultiColor Graphics Array” or “MCGA,” never achieved the widespread acceptance of VGA and was soon abandoned.)

Figure 9-4 The VGA video connector. Both the original pin assignments and those defined by the VESA Display Data Channel are shown. Note that the original pinout is now obsolete and almost never found in current use. Used by permission of VESA.

Among the most significant contributions of the VGA definition were a new output connector standard and a new display format, both of which are still referred to as “VGA.”

The new connector kept the same physical dimensions as the 9-pin D-subminiature of the earlier designs, but placed 15 pins (in 3 rows of 5 pins each) within the connector shell (Figure 9-4). This is referred to as a “high-density” D-subminiature connector, and common names for this design (in addition to simply “VGA”) include “15-HD” and “15-pin D-sub.” The new connector also supported, for the first time, “full analog” video, using a signal definition based loosely on the RS-343 television standard (roughly 0.7 V p-p video with a 75-Ω system impedance). However, separate TTL sync signals, as used in the earlier MDA/CGA/EGA systems, were retained instead of switching to the composite sync-on-video scheme common in television practice.

The new “VGA” timing, at 640 x 480 pixels and 60 Hz refresh, was also a tie to the television world, being in both format and timing essentially a non-interlaced version of the US TV standard. While this level of compatibility with television would very soon be abandoned by the PC industry in the move to ever-increasing pixel formats and refresh rates, the idea would later be revisited as the television and computer display markets converged. The VGA hardware still supported the earlier 720 x 350 and 640 x 200 formats (and in fact these remain in use as “boot” mode formats in modern PCs), but the 640 x 480 mode was intended to be the one used most often in normal operation, and provides a “square-pixel” format with an aspect ratio matching that of standard CRTs (4:3).

The VGA system also introduced, for the first time, a simple system for identifying the display in use. By the time of VGA’s introduction, it was clear that the PC could be connected to any of a number of possible monitors, of varying capabilities. In order to permit the system to determine which of these were in use, and thereby configure itself properly, four pins of the connector were dedicated as “ID bits”. The monitor, or at least its video cable and connector, could ground or leave floating various combinations of these and thereby identify itself as any of 16 possible types. This limitation to a relatively few predefined displays would soon prove to be unacceptable, however, and would be replaced by more sophisticated display identification systems.
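The ID-bit scheme can be illustrated with a short sketch; the pin-to-bit ordering and the handful of monitor types mapped below are hypothetical, not the actual IBM assignments.

# Illustrative decoding of the VGA "ID bit" scheme: four pins, each either
# grounded (0) or left floating (1) by the monitor, give 16 possible codes.
# The bit ordering and the types listed here are hypothetical examples.

def decode_monitor_id(id3, id2, id1, id0):
    """Combine four ID-pin states (0 = grounded, 1 = floating) into a type code."""
    return (id3 << 3) | (id2 << 2) | (id1 << 1) | id0

MONITOR_TYPES = {            # hypothetical lookup of a few of the 16 codes
    0b0000: "no monitor attached",
    0b0001: "monochrome display",
    0b0010: "color display",
}

code = decode_monitor_id(0, 0, 1, 0)
print(MONITOR_TYPES.get(code, "unknown or reserved type"))   # "color display"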

The basic VGA connector would be retained by later, more capable “graphics adapter” hardware, such as the “Super VGA” (SVGA) and “extended VGA” (XGA) designs, and these added not only new features to the system but also support for higher and higher pixel counts, or “higher resolutions,” to use the common PC terminology. SVGA introduced 800 x 600 pixels to the standard set, again a 4:3 square-pixel format, and was followed by the XGA 1024 x 768 format. As noted earlier, both of these names are now used to refer to the formats themselves almost exclusively, and the set has grown to include “Super XGA” (SXGA), at 1280 x 1024 pixels (the one 5:4 format in common use), and “Ultra XGA” (UXGA), or 1600 x 1200 pixels. With each increase in pixel count, however, the hardware retained support for the original “VGA” formats, as this was required in order to provide a common “boot-up” environment, and so multifrequency monitors became the norm in PC displays. This term refers to displays which automatically adapt to any of a wide range of possible input timings; in the PC market, such displays will always support at least the 31.5 kHz horizontal rate required for the standard VGA modes. The development of such capability in the display was one of the primary factors driving the need for better display ID capability, such that the system could determine the capabilities of the display in use at any given time.
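The formats named above, and their aspect ratios, can be tabulated quickly; the short sketch below merely restates the figures given in the text.

# The display formats named above, with their aspect ratios computed.
from math import gcd

formats = {
    "VGA":  (640, 480),
    "SVGA": (800, 600),
    "XGA":  (1024, 768),
    "SXGA": (1280, 1024),
    "UXGA": (1600, 1200),
}

for name, (w, h) in formats.items():
    d = gcd(w, h)
    print(f"{name}: {w} x {h}  ({w // d}:{h // d})")
# All are 4:3 except SXGA, which is the lone 5:4 format.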

Signal Standards for PC Video

As noted above, the analog signal definitions used by the VGA system were loosely based on the RS-343 amplitude standards, or more correctly on the European standards which had been developed from the earlier American practices. The reader may recall from the previous topic that one significant difference between the American and European television standards was the absence in the latter of “setup,” an amplitude distinction between the “blank” and “black” states. The original IBM VGA hardware provided the same 0.7 V p-p signal, without setup, as was common in European television, but again was distinguished from the TV standards in that the PC relied on the simpler, separate TTL-level sync signals. Both industries, at least, kept 75 Ω as the standard for the video interconnect system’s characteristic impedance.

It is important to note, however, that the VGA specifications were never truly an industry standard in the sense of being formally reviewed and adopted by any standards organization or consortium. Manufacturers wishing to produce “VGA-compatible” hardware did so essentially by “doing what IBM had done,” based on the products already brought to market, along with whatever guidance was to be found in the specifications themselves. Significantly, there was never a formal definition released for the video signal requirements under the VGA “standard,” and this did lead to some confusion and compatibility problems, especially in recent years as video frequencies have increased and users have become more demanding in their image quality expectations. Only recently has a formal set of signal specifications been released, by the Video Electronics Standards Association (VESA).

The lack of formal standards, and the slight differences between the various existing video standards, made for some confusion in establishing the “correct” signal amplitudes in many systems. The video output circuit, typically either a separate “RAMDAC” IC (a device including both color look-up tables, in random-access memory or RAM, plus a digital-to-analog converter, as shown in Figure 9-5) or part of an integrated graphics control IC, can most often be set to any desired signal amplitude (within limits) through the selection or adjustment of external components.

Figure 9-5 “RAMDAC” PC graphics output. In order to provide maximum flexibility within limited memory space, PC graphics systems began employing “color map” memory stages, coupled to digital-to-analog converters (DACs), to produce the video output. In this example, 1 Mbyte of frame buffer storage, organized as 1k x 1k pixels, each 8 bits “deep,” feeds a 256-location by 24-bit RAM. This memory, whose contents are also written by the host CPU, maps the 8-bit values for each pixel to any of 2^24, or approximately 16.7 million, possible output values, or 8 bits for each primary color. The color-map memory and output DACs are often integrated into a single component, referred to as a RAMDAC. This technique remains in common use today, even with frame buffer systems providing far more than 24 bits per pixel, as it simplifies the implementation of numerous features.
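The color-map lookup shown in Figure 9-5 can be sketched in software; the 256-entry, 24-bit organization follows the caption, while the palette contents and the full-scale voltage are arbitrary example values.

# Illustrative RAMDAC-style color lookup: 8-bit pixel indices from the frame
# buffer are mapped through a 256-entry by 24-bit color map; each resulting
# 8-bit primary value would then drive a DAC to produce the analog R, G, B.
# The palette contents and full-scale voltage are arbitrary example values.

palette = [(i, i, i) for i in range(256)]    # example palette: a grayscale ramp
palette[1] = (255, 128, 0)                   # example: index 1 remapped to orange

FULL_SCALE_V = 0.700   # assumed nominal full-scale video amplitude, in volts

def ramdac(pixel_index):
    """Map an 8-bit frame-buffer value to analog-equivalent R, G, B voltages."""
    r, g, b = palette[pixel_index]                       # the "RAM" (look-up table)
    to_volts = lambda code: code / 255 * FULL_SCALE_V    # the "DAC"
    return to_volts(r), to_volts(g), to_volts(b)

print(ramdac(1))   # (0.700, ~0.351, 0.0) for the orange entry above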

In addition, these output devices could be obtained in versions which did or did not include “setup,” or which offered a programmable choice. The nominal VGA video signal level was, as mentioned above, 0.7 V p-p, approximately the same as both the European television video standard and the RS-343 definition of 0.714 V p-p. However, setting an output up to deliver the specified RS-343 signal exactly and then turning off “setup” would often simply drop the peak level by the setup or “pedestal” amplitude (0.054 V), resulting in a signal which peaked at only 0.660 V above the blank level. Thus, the nominal level of PC video signals could be 0.660, 0.700, or 0.714 V p-p, and for most PC graphics cards there is a considerable tolerance on top of this (±10% is typical). While such a wide range of possible amplitudes may not affect the basic operation of a CRT display (at least not in any way readily noticeable to the casual user), it does cause problems in critical imaging applications, and especially in those display types which require analog-to-digital conversion of such signals (as is common in many LCD and other non-CRT-based monitors).
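The resulting spread of amplitudes is easy to quantify; in the short sketch below, the ±10% tolerance is the typical figure cited above, not a formal limit.

# Worked illustration of the spread in nominal PC video amplitudes.
# The +/-10% tolerance applied here is the typical figure cited in the text.

nominal_levels = {
    "RS-343 minus setup": round(0.714 - 0.054, 3),   # 0.660 V
    "VGA nominal":        0.700,
    "RS-343 full scale":  0.714,
}

for name, level in nominal_levels.items():
    low, high = level * 0.9, level * 1.1
    print(f"{name}: {level:.3f} V p-p  (with +/-10%: {low:.3f} to {high:.3f} V)")
# The worst case spans roughly 0.594 V to 0.785 V, which is a real problem for
# displays that must digitize the signal against an uncertain full-scale value.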

A separate but equally serious problem for the non-CRT types results from the fact that there was no specification for the stability or skew of the sync signals, especially the horizontal sync, with respect to the video. The key signal parameters now formally specified are summarized below.

Parameter                         Specification/comments
Max. luminance voltage            0.700 VDC, +0.07 V/-0.035 V; DC with respect to return
Video rise/fall time              Max.: 50% of min. pixel clock period; min.: 10% of min. pixel clock period; measured at the 10-90% points
Video settling time               30% of min. pixel clock period to 5% of final full-scale value
Video amplitude mismatch          Max. 6% channel-to-channel over full voltage range
Video noise injection ratio       +/- 2.5% of maximum luminance voltage
Video channel-to-channel skew     Max. of 25% of min. pixel clock period
Video overshoot/undershoot        Max. of +/- 12% of step voltage, over full voltage range
Sync signal rise/fall time        Max. of 80% of min. pixel clock period
Sync signal over/undershoot       Max. of 30% of high level; no ringing into the 0.5-2.4 V range
Jitter (between H sync pulses)    15% (pk-pk or 6 sigma) of min. pixel clock period, 0 Hz to max. horizontal rate, over a minimum of 100K samples
