Programming-Language Design

A significant factor in the process of programming is the programming language being used. The language shapes the very thought processes of the programmer, and the quality of the language can greatly influence the quality of the programs produced. The importance of good language design cannot be overstated.

It is regrettable that many existing programming languages were, in one way or another, poorly designed. The main cause of this is that too often unsystematic, "seat-of-the-pants" techniques were used to design these languages.

In the last few years, however, some basic principles underlying the design of good programming languages have begun to emerge. Although much has been learned, the existing material is fragmented and often hard to find. This chapter attempts to give a unified treatment of the subject, in sufficient depth to be useful to persons actually designing a programming language.

This chapter consists of eight sections. Section 3-1 gives a general overview of the problem and places some limits on the scope of the discussion. Section 3-2 introduces some considerations which must enter the design process at the very beginning. Since most programming-language constructs are derived from already existing sources, Section 3-3 then considers the sources and evaluation of ideas. Section 3-4 discusses, in detail, the goals of programming languages as well as the overall design principles that guide the construction of a language. Section 3-5 is an in-depth discussion of design details; it attempts to survey current ideas on a variety of topics. Section 3-6 discusses how to reduce language size. Section 3-7 completes the coverage of language design by identifying a number of pragmatic issues in designing a language. Section 3-8 comments on the design of the programming language Ada.


OVERVIEW OF THE PROBLEM

In recent years, one of the most prevalent art forms in the programming world has been the design of programming languages. The number of programming languages proposed and designed is extremely large. Even the number of languages for which a translator has been implemented is immense. Sammet (1976) notes 167 such languages in her 1974-1975 roster. Although the first primitive programming languages came into existence over 25 years ago, until recently there was little order in the process of designing new languages.

The early languages were pioneering efforts, exploring a new field. It is not surprising that they were poorly designed. No criticism should be leveled against the designers of FORTRAN; they had quite enough trouble designing and implementing one of the first high-level languages. No one could reasonably expect them to anticipate the requirements and standards applying 25 years later. If there is any criticism to be awarded in connection with FORTRAN, its targets should be the users who have clung so tenaciously to certain obsolete language features during reviews by standards committees, and certain language designers who have so enthusiastically perpetuated FORTRAN’s flaws.

It should be noted that our references to FORTRAN in the preceding paragraph and throughout this chapter refer to FORTRAN IV rather than FORTRAN 77.

After the initial development of high-level languages and the implementation of the first few compilers, there ensued a fairly lengthy period in which conscious attempts were made to design new languages without the flaws of the old. Most of these attempts were failures, not so much from a lack of ideas on how to design better languages as from a surplus of ideas. A good example of this process is the notion that "if it could mean something, it should" (Radin and Rogoway, 1965), which led to PL/I.

More recently, the experience from previous mistakes has led to real knowledge about how to build good programming languages. Basic ideas and principles are becoming sufficiently well established that it is practical to state explicit guidelines for language design. Those areas that are not yet understood are being systematically investigated.

This discussion will accordingly attempt to emphasize a systematic, ordered approach to language design. It should be remembered, however, that to do proper justice to many topics it is often necessary to discuss particulars as well as generalities. The field of language design is by no means fully developed, and many areas are not yet very well unified. Also, many areas interrelate so strongly that it is difficult to discuss them separately.

By necessity, this discussion will, however, restrict its coverage. Elaborate descriptions of possible language features will be limited. It is assumed that the potential language designer has sufficient background in programming languages to be aware of the major ideas. Specific features will be discussed for specific reasons, but no attempt will be made to give a general catalog. There are already several such catalogs (Elson, 1973; Pratt, 1975; Nicholls, 1975). A basic proposition of this entire chapter is that a good language is not just a haphazard collection of features but a unified whole.

It will be assumed that the languages under discussion are "high-level" languages. The discussion will also be restricted largely to procedural languages for writing software ("software" is used here in its most general sense to mean "programs to be used by someone else"). Much of what is said will be applicable to other classes of languages. The designer of a nonprocedural language will omit consideration of some topics (e.g., control structures), but many general principles will remain applicable. Languages intended for nonsoftware purposes (e.g., for use as a glorified calculator or for investigation of algorithms) will sacrifice some considerations to improve others (a calculator language will sacrifice readability and structure for conciseness and convenience; an algorithm-testing language will sacrifice efficiency for convenience and natural notation), but this will change only the priority of the various considerations.

On-line command systems and query languages are to some extent a separate issue. Some of the same principles apply, but many specialized considerations intervene. Martin (1973) gives a good treatment of such matters.

One further restriction of the discussion will be made. It is assumed that languages are designed with the intent to implement a translator for them, even if the implementation is a purely hypothetical prospect. Languages for other purposes (e.g., to serve as a basis for discussion) are not obliged to follow the rules.

PRELIMINARY CONSIDERATIONS

In the design of a new language, certain matters require thought well before any consideration is given to the details of the design. Proper attention to these in advance can avoid trouble later.

The first and most important question that must be asked is:

Is it necessary to design a new language?

Almost any alternative approach will be simpler and faster than attempting the difficult and time-consuming task of designing a completely new language.

Is there an existing language that can be used to fill the requirement? Even if it requires a new implementation, implementing an existing language is easier and faster than designing and then implementing a new language.

Can an existing language be extended? It is easier to design a clean extension to an existing language, even if the extension implies a new compiler, than to design an entire new language. If this approach is taken, however, care should be taken not to make the extension so large and complex that it becomes, in effect, a new language. In such cases, the necessity of retaining some interface to the old language will probably seriously compromise the design of the extension. Also, if one is extending an existing language, it is necessary to select the base language carefully so that the work of extension will be minimized and the extension will fit gracefully into the language. The objective should be to produce a language which is somewhat larger but equally well constructed.

Would it be possible to modify an existing language, possibly using a macroprocessor or something similar? Even a facility for parameterless macros (simply substituting a specified text for each occurrence of a defined identifier) can produce major alterations in the syntax of a language, if skillfully used (e.g., RATFOR as defined by Kernighan and Plauger, 1976). However, the power of this approach for more complex tasks, such as introducing new data structures, is limited.
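
As a small illustration of how far simple text substitution can go, consider the following sketch. It uses the C preprocessor purely as a stand-in for a macroprocessor (RATFOR itself is a separate preprocessor for FORTRAN), and the identifiers are invented; parameterless macros give the host language a different surface syntax without any change to the compiler.

    /* A minimal sketch: each parameterless macro substitutes fixed text for an
       identifier, giving the program an ALGOL-like appearance. */
    #include <stdio.h>

    #define BEGIN {
    #define END   }
    #define IF    if (
    #define THEN  )

    int main(void)
    BEGIN
        int i;
        for (i = 0; i < 3; i = i + 1)
        BEGIN
            IF i > 1 THEN
                printf("i = %d\n", i);
        END
        return 0;
    END

Note that nothing in such a substitution creates new data structures or new semantics; only the appearance of the program changes.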

Serious consideration should be given to these techniques as alternatives to a new language, simply on the grounds of minimizing the work and time involved. There is perhaps no other computer-related problem which looks so temptingly easy and is so terribly hard as doing a good job of language design. Dispense with the notion that it is possible to whip up a design over the weekend and start implementing a translator for it on Monday. A month later, there will still be minor points of language design to settle, and the implementation will have gotten almost nowhere.

Assuming that the decision has been made that none of the preceding approaches will suffice, the next point of interest is:

What is the purpose of the language?

A language is often designed specifically for an area of application. The more attention that is given to restricting the area of application of the language, the better the language will be for problems in that area. In fact, it is not advisable to attempt to design a general-purpose language suitable for any problem. All such attempts to date have been disappointing (notably PL/I and ALGOL 68). Presently, all evidence indicates that no one knows how to do a proper job of designing a language that will be "good for everything."

Finally, the relation of the new language to existing languages must be considered. Weinberg (1971) discusses the psychological phenomenon of "inhibition," which occurs when an old language and a new language are similar but not identical. The user is subject to serious confusion owing to uncertainty about how much of the old language carries over into the new one. For example, FORTRAN programmers learning PL/I formatted I/O have great trouble with E- and F-type formats, which are similar but not identical to the equivalent FORTRAN constructs.

In summary, it is better to make the new language distinctly different from rather than closely similar to any existing language. If the new and old languages are that similar, perhaps the necessity of the new language has not been properly examined.

SOURCES OF IDEAS

Inspiration for programming language design has sprung from several sources in the past. Several of these important sources will be discussed in this section: natural languages, mathematics, existing programming languages, and proper usage and experiments.

We derive much of our current practice in programming languages from natural languages. Obviously, many of our constructs are phrased in a manner which is based on natural language. Less obviously, we also derive more subtle conventions such as the notion of the significant blank, the related idea that (except in strings) any number of blanks is equivalent to one blank, and the concept that the end of a line is usually no more significant than a blank.

Generally speaking, constructs derived from natural language are valuable for their obviousness and readability. They do (at least roughly) what they say. This is a major advantage even to experienced programmers and may be crucial for languages intended for novices and nonprogrammers. On the other hand, the problem of inhibition with regard to other programming languages may operate here. Languages which look too much like English may produce higher error rates as users attempt other English constructions only to discover that they do not work as expected. Using natural language as a model can also result in excessive wordiness, COBOL being an obvious example. Note also that natural languages often contain much ambiguity, which is undesirable in a programming language. Generally speaking, a good strategy is to use natural language as a guide when designing the detailed syntax of new constructs, while avoiding its influence elsewhere.

We next turn to mathematics as a source of ideas. Mathematics has been a significant source for conventions and devices used in programming languages (most obviously the arithmetic expression). It is entirely possible that new programming languages will incorporate other borrowings from mathematics.

It is important, however, to remember that programming is not mathematics. Generally, programmers and mathematicians use different methods and solve different problems while working toward different objectives. It should be noted in particular that, in order to achieve the brevity which is so important to complex formula manipulation, mathematicians often show a complete disregard for clarity and obviousness. This is exemplified in the awesome proliferation of character sets in mathematical notation. While mathematics is a useful source of ideas, particularly with regard to data types, the language designer should be very careful about adopting the mathematical notation for a given concept. Consider, for example, the language APL, which is heavily mathematically oriented. A real implementation problem which arises is the representation of some of the APL symbols on traditional I/O devices.

Existing programming languages can be the best single source of ideas for programming-language designers. Designers must be very careful about including such ideas in their own product, however, because designers in the past have made serious errors.

A few basic principles can be enunciated for distinguishing good ideas worthy of perpetuation from bad ideas worthy only of extinction. Perhaps the major principle is to ask "Why was it done that way?" Once you obtain an answer to this question, ask "Is that reason (still) valid?" Often the answer to this question will be no. For example, FORTRAN’S strange restrictions on array subscripts date back to its first implementation: the implementors wished to make efficient use of the addressing features of their hardware and were afraid that this could not be done if any expression were allowed as a subscript. Although this can perhaps be considered reasonable (or at least understandable) in the circumstances, it certainly is not defensible today, considering the serious reduction in usability that results.

It is also worthwhile to remember that even if the overall verdict on a language is "a bad design," this does not mean that it does not conceal worthwhile features somewhere deep inside. For example, although APL can be criticized on many fronts, its powerful array-manipulation operators may well be worth copying.

Similarly, the fact that a feature is commonly available may not imply that it is a good idea. Many languages have followed ALGOL 60’s lead in allowing the size of arrays to be decided at run time, a feature which introduces considerable implementation problems and interferes with compile-time error checking. This feature may be of only limited value in certain application areas. The phenomenon of default declarations, inherited from FORTRAN, is another example of bad language design. This feature in particular illustrates the fact that some currently popular features are in fact strongly detrimental to program quality.

Another consideration is that a poorly designed feature may be the result of trying to meet two requirements at the same time, with the result that one of them (perhaps the more conspicuous one) is poorly served. An example of this is the pass-as-if-by-macro-substitution parameters of ALGOL 60 (also known as "pass-by-name"), which appear to have resulted from confusion of the two notions "procedure," a logical concept, and "macro," an implementation technique for procedure.
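
The distinction can be illustrated with a small sketch in C (the identifiers are invented): the macro version re-evaluates the text of its argument at every use, which is essentially what ALGOL 60 pass-by-name does, whereas the true procedure evaluates its argument exactly once.

    /* Sketch of the procedure-versus-macro confusion behind pass-by-name. */
    #include <stdio.h>

    static int counter = 0;
    static int next(void) { return ++counter; }    /* an argument with a side effect */

    #define TWICE_MACRO(x) ((x) + (x))             /* "procedure" by macro substitution */

    static int twice_proc(int x) { return x + x; } /* a true procedure */

    int main(void)
    {
        counter = 0;
        printf("macro:     %d\n", TWICE_MACRO(next()));  /* next() runs twice: 1 + 2 = 3 */
        counter = 0;
        printf("procedure: %d\n", twice_proc(next()));   /* next() runs once:  1 + 1 = 2 */
        return 0;
    }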

Some desirable features can be identified from observing the way programmers use the facilities of existing languages. In particular, it is possible to derive restrictions that can improve error checking or readability while not restricting the programmer, by observing what parts of a language are rarely used. For example, some of the motivation for the anti-GOTO forces comes from the observation that good programmers do not use the full power of the GOTO—they use the GOTO to simulate less general but more understandable structures.
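
The following sketch (in C, with invented details) shows the typical pattern: the GOTO version merely simulates a loop that the structured form states directly.

    /* Disciplined GOTO usage usually just simulates a structured construct. */
    #include <stdio.h>

    int main(void)
    {
        int i;

        /* GOTO version: simulates a while loop */
        i = 0;
    loop:
        if (i >= 3) goto done;
        printf("goto body, i = %d\n", i);
        i = i + 1;
        goto loop;
    done:

        /* Equivalent structured version */
        i = 0;
        while (i < 3) {
            printf("while body, i = %d\n", i);
            i = i + 1;
        }
        return 0;
    }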

Similarly, by observing established patterns of usage one can determine what features are desirable. For example, measurements of actual programs by Alexander (1972) have established that two-thirds of all IFs do not have an ELSE clause; so it would probably be unreasonable to require the ELSE in all IFs. The relatively recent notion of experimenting with design changes (Gannon, 1975) under controlled conditions offers basically the same types of conclusions. Undoubtedly research will continue in the areas of measuring programming-language use and experimenting in programming-language design.

GOALS AND DESIGN PHILOSOPHIES OF PROGRAMMING LANGUAGES

When a programming language is designed, particular attention must be given to the goals of the language. A number of important goals, such as human communication, the prevention and detection of errors, usability, program effectiveness, compilability, efficiency, machine independence, and simplicity, are described in turn.

This section is also concerned with design philosophies. Philosophies such as uniformity, orthogonality, generality, language modularity, minimality, and level of abstraction are discussed.

Human Communication

While it is important to communicate efficiently with the computer, detect errors well, and so on, the most basic goal of a programming language is communication between human beings. If a program cannot be understood by humans, it is difficult to verify and it cannot be maintained or modified. Even if the program is still clear to its author, this is a strictly temporary condition. It has been suggested by Kernighan and Plauger (1976) that readability is the best single index of good programming. Certainly, one of the crucial factors in achieving readable programs is an understandable programming language.

It is important to realize that the problems of human communication cannot be left entirely to comments or external documentation. Programmers dislike writing excessive comments and tend to avoid them. External documentation is all too often out-of-date and incomplete, and the same can be true of internal documentation. A good and very reliable form of documentation is the program itself—if it is readable. Programming languages must be designed with constant attention to clarity and understandability.

It is vital to distinguish between readability and writability. It is important to be able to write programs easily. It is necessary to be able to read programs easily. The readability of programs is far more important in the long run than their writability (Hoare, 1973). This is of particular significance because many "powerful" and "convenient" features tend to produce monumentally obscure programs (e.g., APL "one-liners").

The most basic implication of the readability problem for the design of programming languages is that the syntax must reflect the semantics. If the syntax of the programming language does not accurately and completely reflect the manipulations being carried out, there is little hope for understanding the program. Making the syntax match the semantics implies several things. The syntax must clearly state what is happening in sufficient detail to be understandable but not in such unnecessary detail as to be overwhelming. Care should be taken to ensure that a construct performing a particular operation does not read as if it were doing something similar but not quite the same. Furthermore, it is most undesirable for significant actions to be undertaken without any indication in the syntax whatsoever. This rule may seem obvious, but a considerable number of languages violate it (notably PL/I with its conversions and ON-units, and ALGOL 68 with its coercions).

A most unfortunate choice of terminology in the field of computer science is the term syntactic sugar because it implies that making language constructs understandable is an unnecessary frill. It is vital that the syntax of a programming language should reflect human thought patterns, rather than the more "elegant" but much more obscure patterns of, for example, the lambda calculus (Church, 1941) (or, still worse, the peculiarities of a particular computer’s instruction set).

A final facet that must be mentioned with regard to human communication through programming languages is that programmers are not computers. Just because a compiler can understand a given construct there is no guarantee that a programmer can. The limitations of the human mind make it quite easy for a complicated structure to become unmanageable, while the compiler has no problem. For example, a simple stack algorithm handles nested constructs for a compiler. Human beings, however, probably do not function in a stacklike manner, and deeply nested constructs can very easily become completely incomprehensible. Humans are ill-equipped to cope with situations where the effect of a construct depends in a complex way on context (as can arise through the use of PL/I ON units). It is also quite possible for a construct to be ambiguous to a human while it is clear and obvious to a compiler (Weinberg, 1971). A good example of this is the simple arithmetic expression "a/b/c". In general, a parser will not consider this ambiguous, but how many human programmers would write it without using parentheses? Although it may be more time consuming for a compiler to disallow such a construct, the protection can be worth it.
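
A small example (in C, for concreteness) shows the point: the parser resolves a/b/c by left associativity without difficulty, but a human reader can easily take it the other way, and parentheses remove the doubt.

    /* The parser has no trouble with a/b/c; a human reader might. */
    #include <stdio.h>

    int main(void)
    {
        double a = 8.0, b = 4.0, c = 2.0;

        printf("a/b/c   = %g\n", a / b / c);     /* parsed as (a/b)/c = 1 */
        printf("(a/b)/c = %g\n", (a / b) / c);   /* 1, same as above      */
        printf("a/(b/c) = %g\n", a / (b / c));   /* 4, what a reader may  */
                                                 /* have intended         */
        return 0;
    }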

Prevention and Detection of Errors

It is a recognized fact that programmers make errors. Although much current work in the field of software engineering is directed toward stopping errors at the source (the programmer), there is no foreseeable chance that errors will be eliminated entirely. It is therefore necessary to assist the programmer in the task of not only preventing but also detecting, identifying, and correcting errors. A good programming language can be a major factor in this. In fact, the efforts of Lampson et al. (1977) in the design of EUCLID, a PASCAL-based language intended for the expression of system programs which are to be verified, indicate that a programming language can significantly affect the ease with which a program can be verified to be error-free. Undoubtedly, more research will be conducted in this important area.

One of the most fundamental services a high-level language performs is to make certain classes of errors impossible. A good example of this is the old assembly-language hazard of branching into the middle of the data. Given a properly functioning compiler, there is no chance whatsoever that an ALGOL programmer will ever commit this blunder. This is an excellent example of a point made by Wirth (1974) that high-level languages are not just clerical aids but are even more valuable in that the restrictions they impose on the programmer actually assist in producing error-free programs.

A similar benefit of high-level languages is that certain classes of errors are rendered unlikely, though not impossible. For example, the programmer is much less likely to make an error of control flow in a language such as PASCAL than in assembly language, because PASCAL provides constructs which organize control flow better than the conditional branches of assembly language (or FORTRAN!). This sort of error prevention is usually a question of providing the programmer with constructs which are convenient to use, are close to the programmer’s thought processes, and impose reasonable restrictions on the program.

Even with all possible help in the prevention of errors, some errors will still be made. It is therefore desirable that the programming language should detect errors and render them as conspicuous as possible, so that they can be eliminated.

Error detection is based on useful redundancy. An error is detected by noticing that two portions of the program, carrying some of the same information, do not match each other.

The explicit declaration of variables exemplifies beneficial redundancy. When a variable is referenced, the compiler can check that the variable is of a type for which the particular operation is meaningful. If the type is inappropriate, an error has been made.
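
The following sketch (C is used only for concreteness; the identifiers are invented) shows the redundancy at work: the declaration tells the compiler what operations are meaningful for each name, and a meaningless operation is rejected before the program ever runs.

    /* An explicit declaration is information the compiler checks each use against. */
    #include <stdio.h>

    struct date { int day, month, year; };

    int main(void)
    {
        int count = 0;
        struct date today = { 14, 7, 1983 };

        count = count + 1;        /* legal: '+' is defined for integers          */
        /* today = today + 1; */  /* compile-time error: '+' is meaningless for  */
                                  /* the declared type 'struct date'             */
        printf("%d %d-%d-%d\n", count, today.day, today.month, today.year);
        return 0;
    }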

It should be noted that not all redundancy is useful redundancy. An excellent example of redundancy that is not only useless but can be actively harmful is the necessity in FORTRAN of repeating COMMON definitions in each program component using them. An error in one such repetition generally goes undetected and will usually have serious consequences.
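
The same hazard can be sketched in C terms (the file names and identifiers are invented): the "definition" of the shared data must be repeated by hand in each file, and a mismatch between the repetitions is typically not detected by either the compiler or the linker.

    /* file1.c -- defines the shared data */
    double shared[100];

    /* file2.c -- repeats the declaration by hand, as FORTRAN COMMON requires;
       the mismatch in type and size goes undetected and silently corrupts data. */
    extern int shared[50];

    void touch(void)
    {
        shared[0] = 1;    /* writes an int into storage declared as double */
    }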

Useful redundancy can easily be built into a language. The most significant sources of useful redundancy are mandatory declarations of all variables and strong type checking (of the type used in ALGOL and PASCAL). One side issue of strong type checking is the absence of automatic conversions or coercions which attempt to turn the data type supplied into the one which is "obviously" desired in the context. Such behind-the-scenes fudging defeats much of the value of type checking and can result in obscure code. For example, in ALGOL 68 (which has coercions), if x is a pointer-to-integer, the one variable which cannot be altered by an assignment such as

    x := 1

is x itself! Dereferencing (ALGOL 68 terminology for pointer chasing), conversion, etc., should be present only if explicitly requested.

Default declarations, as in FORTRAN, are a good example of unnecessary loss of useful redundancy. Default declarations interfere seriously with type checking, encourage programming sloppiness, and often are so complex as to be confusing (how many human beings are aware of PL/I’s exact defaults?).

Another area of useful redundancy which is not as well used as it should be is selection by name rather than by position. Everyone knows that it is much easier and clearer to say "x.size" rather than "x[5]" when you mean the "size" field of the "x" structure. It is generally acknowledged that a CASE statement which requires the programmer to label the alternatives with the values that cause them to be selected is much clearer and less error-prone than an unlabeled version.
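
Both points are easily illustrated (the sketch below is in C, with invented names): the field reference says what is meant, and the labeled alternatives of the switch state explicitly which value selects which branch.

    /* Selection by name, and alternatives labeled with their selecting values. */
    #include <stdio.h>

    struct record { int id; int size; int owner; };

    int main(void)
    {
        struct record x = { 7, 120, 3 };
        int command = 2;

        printf("size = %d\n", x.size);      /* selection by name, not by position */

        switch (command) {                  /* each alternative labeled with the  */
        case 1:  printf("open\n");  break;  /* value that selects it              */
        case 2:  printf("close\n"); break;
        default: printf("unknown command\n"); break;
        }
        return 0;
    }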

Having discussed some of the more important aspects of error prevention, let us turn our attention to error detection. Error detection must be reliable. Unreliable error detection is worse than none at all, since programmers tend to rely on it when they should not. This can result in errors that are almost impossible to find. An example of this is the "union" type of ALGOL 68 (e.g., the value of a variable can be either real or integer), which cannot be checked at compile time. If one is combining separately compiled modules into a single program, it is almost impossible to provide completely reliable checking for unions without ruinous expense.

Error detection should also be accurate. The user should be told exactly where and how the error occurred, so that it can be analyzed and corrected. Although the biggest single factor in this is the compiler (see Chap. 5), language design also has a large impact. For example, a language with unrestricted pointer variables (e.g., PL/I) will make the detection and analysis of runaway-pointer errors almost impossible.

An important aspect of error detection pertains to the question of compile-time versus run-time detection. Generally speaking, languages should be designed to permit error detection at compile time. This is generally much cheaper and usually makes the generation of intelligent error messages much easier. It is also easier (and much safer!) to continue a compilation to find further errors than to continue execution after an error. Whenever possible, language rules should be designed so that they can be easily enforced at compile time. For example, it is generally difficult and expensive to detect at run time that a pointer variable points to an object of the wrong type. If pointer variables are declared not as "POINTER" but as "POINTER TO type," such error detection can be done almost entirely at compile time.
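
The difference can be sketched in C (with invented identifiers): a pointer declared with a specific pointee type gives the compiler enough information to diagnose a wrong-type assignment immediately, while an untyped pointer postpones the problem to run time.

    /* A typed pointer permits compile-time checking; an untyped one does not. */
    #include <stdio.h>

    int main(void)
    {
        double balance = 10.50;
        int    count   = 3;

        double *pd = &balance;    /* "POINTER TO double"                          */
        void   *pv = &balance;    /* untyped pointer: no such protection          */

        /* pd = &count; */        /* compile-time diagnostic: wrong pointee type  */
        pv = &count;              /* accepted; any misuse surfaces only later     */

        printf("%f %d\n", *pd, *(int *)pv);
        return 0;
    }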

Some classes of errors simply cannot be discovered at compile time—for example, the general problem of deciding whether an array subscript is in range. The run-time support routines should be set up so that run-time checks are simple and not excessively expensive. Ideally, it should be so simple and cheap to make the run-time checks that the checks could be left in for "production" runs, when error detection is even more vital. As Hoare states (Hoare, 1973), "What would we think of a sailing enthusiast who wears his life jacket when training on dry land, but takes it off as soon as he goes to sea?"
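
A run-time subscript check need amount to no more than the following sketch (in C, with invented names): one comparison per access, cheap enough to contemplate leaving in place for production runs.

    /* A simple checked-subscript routine of the kind the run-time support
       should make inexpensive. */
    #include <stdio.h>
    #include <stdlib.h>

    #define TABLE_SIZE 10

    static int table[TABLE_SIZE];

    static int *checked(int i)
    {
        if (i < 0 || i >= TABLE_SIZE) {
            fprintf(stderr, "subscript %d out of range 0..%d\n", i, TABLE_SIZE - 1);
            exit(EXIT_FAILURE);
        }
        return &table[i];
    }

    int main(void)
    {
        *checked(3) = 99;              /* in range: proceeds normally        */
        printf("%d\n", *checked(3));
        *checked(12) = 0;              /* out of range: caught at once, not  */
        return 0;                      /* a silent memory smash              */
    }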

An example of how run-time checking can be made impractically expensive is the combination of circumstances in PL/I which makes it possible to get "dangling pointers" (pointers which point to now-deallocated objects). This sort of error cannot be found at compile time and is difficult and costly to check at run time.
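
The phenomenon is easily reproduced in any language with an unchecked free store; the sketch below (in C, with invented details) shows why neither compile-time rules nor inexpensive run-time checks catch it.

    /* A dangling pointer: no compile-time rule is broken, and detecting the
       stale reference at run time would require bookkeeping on every use. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);
        if (p == NULL) return 1;

        *p = 42;
        free(p);            /* the object is deallocated ...                    */
        /* *p is now a dangling reference; using it is undefined behaviour and  */
        /* would not be caught, e.g.:  printf("%d\n", *p);                      */
        p = NULL;           /* the usual defensive discipline                   */
        return 0;
    }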

Usability

We now turn to the topic of usability. It is obviously important that a language be easy to use. Certain aspects of this property deserve specific mention. The most important consideration, readability, has already been discussed and thus will not be covered here.

It is desirable that the constructs of a language be easy to learn and to remember. Once programmers are familiar with the language, it should not be necessary for them to consult manuals constantly. The language should be so simple and straightforward that the proper way to do something is reasonably apparent. On the other hand, there should not be many different ways to do the same thing, as the programmer will then flounder helplessly trying to decide which is best. The best example of general violation of this aspect of usability is ALGOL 68, which can do almost anything—if the programmer can only figure out how! PL/I is a good example of a language which provides many different ways to do something (most languages make do with one sort of integer; PL/I has three!).

Another important principle is "no surprises," also known as the "rule of least astonishment." A programmer who has made the effort to learn the language should be able to predict the behavior of a construct accurately if it can be predicted at all. PL/I is the major bad example here; it is strewn with constructs which do not do what the programmer thinks, as exemplified by FIXED division (see Hughes, 1973).

Some languages are much adored for their flexibility. Flexibility is like redundancy—it can be useful or useless. It is obviously necessary for languages to be able to meet demands that cannot be fully anticipated by their designers. However, useless flexibility can be a problem. An example of this is the feature of ALGOL 60 which allows a variable name to be redeclared in an inner BEGIN block: the compiler has to cope with several variables having the same name, unpleasant possibilities for programmer errors emerge, and the feature is seldom used (with the obvious exception of subprograms). Generally speaking, useless flexibility should be eliminated whenever possible.
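
The sort of trouble invited is shown in the sketch below (in C, which permits the same redeclaration; the names are invented): the inner declaration silently hides the outer one, so an assignment the programmer may have meant for the outer variable affects only the inner one.

    /* Redeclaration in an inner block: legal, rarely wanted, easy to misread. */
    #include <stdio.h>

    int main(void)
    {
        int total = 100;
        {
            int total = 0;         /* inner declaration hides the outer one     */
            total = total + 1;     /* updates the inner total only              */
            printf("inner total = %d\n", total);   /* 1                         */
        }
        printf("outer total = %d\n", total);       /* still 100, perhaps not    */
        return 0;                                  /* what was intended         */
    }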

If it does not interfere too seriously with other considerations, a language should be concise. Unfortunately, conciseness does tend to interfere strongly with readability (for example, in APL). There is also a conflict with useful redundancy in that repeating information (the essence of redundancy) often requires more writing (for example, declaring all variables). Conciseness should probably be given the lowest priority among the various design issues.
