Virtual machine (Inventions)

The invention: A memory-management technique that swaps storage space between a computer’s random access memory (RAM) and its disk, creating a larger “virtual” memory and greatly increasing the machine’s effective power.

The people behind the invention:

International Business Machines (IBM) Corporation, an American data processing firm
Massachusetts Institute of Technology (MIT), an American university
Bell Labs, the research and development arm of the American Telephone and Telegraph Company

A Shortage of Memory

During the late 1950's and the 1960's, computers generally used two types of data storage areas. The first type, called “magnetic disk storage,” was slow and large, but its storage space was relatively cheap and abundant. The second type, called “main memory” (also often called “random access memory,” or RAM), was much faster. Computation and program execution occurred primarily in the “central processing unit” (CPU), which is the “brain” of the computer. The CPU accessed RAM as an area in which to perform intermediate computations, store data, and store program instructions.
To run programs, users went through a lengthy process. At that time, keyboards with monitors that allowed on-line editing and program storage were very rare. Instead, most users typed their programs or text onto paper cards with typewriter-like devices. Holding decks of such cards, users waited in lines to use card readers. The cards were read and returned to the user, and the programs were scheduled to run later. Hours later or even overnight, the output of each program was printed in some predetermined order, after which all the outputs were placed in user bins. Making any necessary program corrections could take several days.
Because CPUs were expensive, many users had to share a single CPU. If a computer had a monitor that could be used for editing or could run more than one program at a time, more memory was required. RAM was extremely expensive, and even multimillion-dollar computers had small memories. In addition, this primitive RAM was extremely bulky.


Virtually Unlimited Memory

The solution to the problem of creating affordable, convenient memory came in a revolutionary reformulation of the relationship between main memory and disk space. Since disk space was large and cheap, it could be treated as an extended “scratch pad,” or temporary-use area, for main memory. While a program ran, only small parts of it (called pages or segments), normally the parts in use at that moment, would be kept in the main memory. If only a few pages of each program were kept in memory at any time, more programs could coexist in memory. When pages lay idle, they would be sent from RAM to the disk, as newly requested pages were loaded from the disk to the RAM. Each user and program “thought” it had essentially unlimited memory (limited only by disk space), hence the term “virtual memory.”
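The logic of demand paging can be sketched in a few lines of code. The Python fragment below is an illustrative toy only: the three-frame memory, the page-fault counting, and the least-recently-used (LRU) replacement rule are assumptions chosen for the sketch, not the policy of any particular historical system.

    from collections import OrderedDict

    class PagedMemory:
        """Toy model of demand paging: a small physical memory backed by
        a large disk, with least-recently-used (LRU) page replacement."""

        def __init__(self, num_frames):
            self.num_frames = num_frames   # physical RAM capacity, in pages
            self.frames = OrderedDict()    # resident pages: number -> contents
            self.disk = {}                 # backing store for evicted pages
            self.page_faults = 0

        def access(self, page):
            if page in self.frames:
                self.frames.move_to_end(page)   # mark as most recently used
                return self.frames[page]
            self.page_faults += 1               # page fault: fetch from disk
            if len(self.frames) >= self.num_frames:
                victim, contents = self.frames.popitem(last=False)  # evict LRU page
                self.disk[victim] = contents                        # write it back out
            self.frames[page] = self.disk.pop(page, "data for page %d" % page)
            return self.frames[page]

    mem = PagedMemory(num_frames=3)
    for page in [0, 1, 2, 0, 3, 0, 4]:    # five distinct pages, only three frames
        mem.access(page)
    print(mem.page_faults)                # prints 5

Running the loop produces five page faults: the first touch of each of the five pages misses, and the last two pages can be loaded only by evicting the least recently used resident pages to disk.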
The system did, however, have its drawbacks. The swapping and paging processes reduced the speed at which the computer could process information, and coordinating these activities required additional circuitry. Matching each program to the amount of virtual memory space it required was critical. To keep the system operating accurately, stably, and fairly among users, all computers have an “operating system.” Operating systems that support virtual memory are more complex than the older varieties.
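A back-of-the-envelope calculation shows why even infrequent paging slowed a machine noticeably. The access times and fault rate below are assumed, illustrative figures, not measurements from any real system:

    ram_ns = 100           # assumed RAM access time, in nanoseconds
    disk_ns = 10_000_000   # assumed disk page-fetch time (10 milliseconds)
    fault_rate = 0.001     # assumed: one access in a thousand faults

    effective_ns = (1 - fault_rate) * ram_ns + fault_rate * disk_ns
    print(effective_ns)    # about 10,100 ns, roughly 100 times slower than RAM

Even a one-in-a-thousand fault rate makes the average memory access roughly a hundred times slower than RAM alone, which is why page-replacement decisions and the extra coordination circuitry mattered so much.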
Many years of research, design, simulations, and prototype testing were required to develop virtual memory. CPUs and operating systems were designed by large teams, not individuals. Therefore, the exact original discovery of virtual memory is difficult to trace. Many people contributed at each stage.
The first rudimentary implementation of virtual memory concepts was on the Atlas computer, constructed in the early 1960's at the University of Manchester in England. It coupled RAM with a magnetic drum, a rotating magnetizable cylinder that it read and wrote, making it a two-level storage system.
In the late 1960's, the Massachusetts Institute of Technology (MIT), Bell Telephone Labs, and the General Electric Company (later Honeywell) jointly designed a high-level operating system called MULTICS, which had virtual memory.
During the 1960's, IBM worked on virtual memory, and one model of the IBM 360 series (the Model 67) supported the new memory system. With the evolution of engineering concepts such as circuit integration, IBM produced a new line of computers called the IBM 370 series. The IBM 370 supported several advances in hardware (equipment) and software (program instructions), including full virtual memory capabilities. It was a platform for a new and powerful “environment,” or set of conditions, in which software could be run; IBM called this environment the VM/370. The VM/370 went far beyond virtual memory, using virtual memory to create virtual machines. In a virtual machine environment, each user can select a separate and complete operating system. This means that separate copies of operating systems such as OS/360, CMS, DOS/360, and UNIX can all run in separate “compartments” on a single computer. In effect, each operating system has its own machine. Reliability and security were also increased. This was a major breakthrough, a second computer revolution.
Another measure of the significance of the IBM 370 was the commercial success and rapid, widespread distribution of the system. The large customer base for the older IBM 360 also appreciated the IBM 370's compatibility with that machine. The essentials of the IBM 370 virtual memory model were retained even in the 1990's generation of large, powerful mainframe computers. Furthermore, its success carried over to the design decisions of other computers in the 1970's.
The second-largest computer manufacturer, Digital Equipment Corporation (DEC), followed suit; its popular VAX minicomputers had virtual memory in the late 1970's. The celebrated UNIX operating system also added virtual memory. IBM’s success had led to industry-wide acceptance.

Consequences

The impact of virtual memory extends beyond large computers and the 1970's. During the late 1970's and early 1980's, the computer world took a giant step backward. Small, single-user computers called personal computers (PCs) became very popular. Because they were single-user models and were relatively cheap, they were sold with weak CPUs and deplorable operating systems that did not support virtual memory. Only one program could run at a time. Larger and more powerful programs required more memory than was physically installed. These computers crashed often.
Virtual memory raises PC user productivity. With virtual memory, users can continue editing files during data transmissions or long calculations, even when physical memory runs out. Most major PCs now have improved CPUs and operating systems, and these advances support virtual memory. Popular virtual memory systems such as OS/2, Windows/DOS, and MAC-OS are available. Even the venerable virtual memory UNIX has been ported to PCs.
The concept of a virtual machine has been revived, in a weak form, on PCs that have dual operating systems (such as UNIX and DOS, OS/2 and DOS, and MAC and DOS combinations).
Most powerful programs benefit from virtual memory. Many dazzling graphics programs require massive RAM but run safely in virtual memory. Scientific visualization, high-speed animation, and virtual reality all benefit from it. Artificial intelligence and computer reasoning are also part of a “virtual” future.
See also Colossus computer; Differential analyzer; ENIAC computer; IBM Model 1401 computer; Personal computer; Robot (industrial); SAINT; Virtual reality.
