SYSTEMS THEORY

Systems theory is much more (or perhaps much less) than a label for a set of constructs or research methods. The term systems is used in many different ways (Boguslaw [1965] 1981, pp. 29-46), and this inevitably creates considerable confusion. For some it is a “way” of looking at problems in science, technology, philosophy, and many other fields; for others it is a specific mode of decision making. In the late twentieth-century Western world it has also become a means of referring to skills of various kinds and of defining professional elites. Newspaper “want ads” reflect a widespread demand for persons with a variety of “system” skills, for experts in “systems engineering,” “systems analysis,” “management systems,” “urban systems,” “welfare systems,” and “educational systems.”

As a way of looking at things, the ”systems approach” in the first place means examining objects or processes, not as isolated phenomena, but as interrelated components or parts of a complex. An automobile may be seen as a system; a car battery is a component of this system. The automobile, however, may also be seen as a component of a community or a national transportation system. Indeed, most systems can be viewed as subsystems of more encompassing systems.
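The nesting idea can be shown in miniature. The following Python fragment is a purely illustrative sketch with invented names; it represents the battery-automobile-transportation-system chain as nested components and asks how deep one subsystem sits inside another.

    # Each system is itself a component of a more encompassing system.
    # Names and structure here are illustrative only.
    battery = {"name": "battery", "components": []}
    automobile = {"name": "automobile", "components": [battery]}
    transport = {"name": "national transportation system", "components": [automobile]}

    def depth(system, target, level=0):
        """Return how many levels down `target` sits inside `system`, or None."""
        if system is target:
            return level
        for part in system["components"]:
            found = depth(part, target, level + 1)
            if found is not None:
                return found
        return None

    print(depth(transport, battery))  # 2: a subsystem of a subsystem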

Second, beyond the idea of interrelatedness, systems imply the idea of control. This always involves some more or less explicit set of values, which in some systems may be as simple as maintaining a given temperature range. The idea of control was implicit in Walter B. Cannon’s original formulation of the concept of homeostasis, his term for the body’s ability to maintain its temperature equilibrium. Cannon suggested (Cannon 1939, p. 22) that the methods used by animals to hold their body temperatures within well-established ranges might be adapted for use in connection with other structures, including social and industrial organizations.
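Cannon’s homeostasis corresponds closely to what control engineers call negative feedback. As a minimal sketch of that idea, and nothing drawn from Cannon himself, the following Python fragment holds a temperature inside a set range; all names and constants are invented for the illustration.

    # Minimal negative-feedback loop: keep a temperature inside a set range.
    # All names and constants are illustrative, not drawn from any cited source.
    SET_POINT = 37.0   # target temperature (degrees C)
    TOLERANCE = 0.5    # acceptable deviation on either side of the target

    def control_action(measured):
        """Return a heating (+) or cooling (-) correction; zero inside the range."""
        error = SET_POINT - measured
        if abs(error) <= TOLERANCE:
            return 0.0           # within range: the system leaves well enough alone
        return 0.2 * error       # proportional correction outside the range

    temperature = 39.0           # a disturbed starting state
    for _ in range(30):
        temperature += control_action(temperature)
    print(round(temperature, 2)) # settles back inside the 36.5-37.5 band

The point of the sketch is only that “control” presupposes a value (here, the set point) against which the state of the system is continually compared.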

A third idea involved in the systems way of looking at things is Ludwig von Bertalanffy’s search for a “general systems theory” (von Bertalanffy 1968; Boguslaw 1982, pp. 8-13). This is essentially a call for what many would see as an interdisciplinary approach. Von Bertalanffy noted the tendency toward increased specialization in the modern world and saw entire disciplines—physics, biology, psychology, sociology, and so on—encapsulated in their private universes of discourse, with little communication among them. He failed to note, however, that new interdisciplinary disciplines often tend quickly to build their own insulated languages and conceptual cocoons.

A fourth idea in the systems approach to phenomena is in some ways the most pervasive of all. It focuses on the discrepancy between the objectives set for a component and those required by the system as a whole. In organizations this is illustrated by the difference between the goals of individual departments and those of the entire organization. For example, the sales department wants to maximize sales, but the organization finds it more profitable to limit production, for a variety of reasons (a toy numeric version of this conflict appears below). If an entire community is viewed as a system, a factory component of this system may decide that short-term profitability is a more desirable objective than investment in pollution-control devices to protect the health of its workers and community residents. Countless examples of this sort can be found, and they all seem to document the idea that a system’s objectives take precedence over those of its subsystems. This is a readily understandable notion with respect to exclusively physical systems. When human beings are involved on any level, things become much more complicated.
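As a hypothetical sketch of the sales-department example, with all figures invented, the fragment below compares what is optimal for the sales subsystem (volume) with what is optimal for the organization (profit) when unit costs rise with output.

    # Toy version of the subsystem-versus-system conflict; all numbers invented.
    PRICE = 10.0

    def revenue(q):
        # The sales subsystem's objective: units sold times price.
        return PRICE * q

    def cost(q):
        # Marginal cost rises with volume (congestion, overtime, wear).
        return 2.0 * q + 0.05 * q * q

    def profit(q):
        # The organization's objective.
        return revenue(q) - cost(q)

    quantities = range(0, 201)
    best_for_sales = max(quantities, key=revenue)  # always the largest volume: 200
    best_for_firm = max(quantities, key=profit)    # an interior optimum: 80
    print(best_for_sales, best_for_firm)

Maximizing the subsystem’s objective (sell 200 units) would actually cost the larger system money; the profit-maximizing output is 80. The numbers are arbitrary, but the divergence is the point.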

Physical components or subsystems are not expected to be innovative. Their existence is ideal when it proceeds in a “normal” routine. If they wear out they can be replaced relatively cheaply, and if they are damaged they can be either repaired or discarded. They have no sense of risk and can be required to work in highly dangerous environments twenty-four hours a day, seven days a week, if necessary. They do not join unions, never ask for increases in pay, and are completely obedient. They have no requirements for leisure time, cultural activities, or diversions of any kind. They are completely expendable if the system demands sacrifices. They thrive on authoritarian or totalitarian controls and cannot deal with the notion of democracy.

As a specific mode of decision making, it is this top-down authoritarianism that seems to characterize systems theory when it is predicated on a physical-systems prototype. Computerization of functions previously performed by human beings ostensibly simplifies the process of converting this aspect of the theory into action. Computer hardware is presumably completely obedient to commands received from the top; software prepared by computer programmers is presumably similarly responsive to system objectives. Almost imperceptibly, this has led to a condition in which systems are increasingly seen and treated as identical to the machine in large-scale “man-machine systems.” (The language continues to reflect deeply embedded traditions of male chauvinism.)

These systems characteristically include a sizable computerized information-processing subsystem that keeps assuming increasing importance. For example, the U.S. Internal Revenue Service (IRS) obviously has enormous quantities of information to process. Periodically, IRS officials feel the necessity to increase computer capacity. To accomplish this, the practice has been to obtain bids from computer manufacturers. One bid, accepted years ago at virtually the highest levels of government, proposed a revised system costing between $750 million and $1 billion.

Examination of the proposal by the congressional Office of Technology Assessment uncovered a range of difficulties. Central to these was the fact that the computer subsystem had been treated as the total system (perhaps understandably since the contractor was a computer corporation). The existing body of IRS procedures, internal regulations, information requirements, and law (all part of the larger system) was accepted as an immutable given. No effort had been made to consider changes in the larger system that could conceivably eliminate a significant portion of the massive computer installation (Office of Technology Assessment 1972).

Almost two decades after attention had been called to these difficulties, system problems at the IRS continued to exist. A proposed Tax System Modernization was formulated to solve them. The General Accounting Office raised questions about whether this proposal, estimated to cost several billion dollars, was in fact “a new way of doing business” or simply intended to lower costs and increase efficiency of current operations. Moreover, the Accounting Office suggested that the lack of a master plan made it difficult to know how or whether the different component subsystems would fit together. Specifically, for example, it asked whether the proposal included a telecommunications subsystem and, if so, why such an item had not been included among the budgeted items (Rhile 1990).

To exclude the larger system from consideration and assume it is equivalent to a subsystem is to engage in a form of fragmentation that has long been criticized in related areas by perceptive sociologists (see Braverman 1974; Kraft 1977). Historically, fragmentation has led to the deskilling of workers, that is, the replacement of craft tasks by large numbers of relatively simpler tasks requiring only semiskilled or unskilled labor. This shields the larger system from scrutiny and facilitates centralization of control and power. It also facilitates computerization of work processes, which extends that control even further.

In the contemporary industrial and political worlds, power is justified largely on the basis of “efficiency.” It is exercised largely through monopolization of information. Various forms of social organization and social structure can be used for the exercise of this power. Systems theory focuses not on alternative structures but, rather, on objectives, a subset of what sociologists think of as values. To hold power is to facilitate rapid implementation of the holder’s values.

Fragmentation, in the final analysis, is an effort to divide the world of relevant systems into tightly enclosed cubbyholes of thought and practice controlled from the top. This compartmentalization is found in both government and private enterprises. The compartments are occupied by those devoid of genuine power, and their boundaries reflect the limited range of decisions available to the occupants. Those at the summit of power pyramids are exempt from these constraints and, accordingly, enjoy considerably more “freedom” (Pelton, Sackmann, and Boguslaw 1990).

An increasingly significant form of fragmentation is found in connection with the operation of many large-scale technological systems. The sociologist Charles Perrow, in a path-breaking study, examined an enormous variety of such systems. He reviewed operations in nuclear power, petrochemical, aircraft, and marine systems, and in a variety of others involving dams, mines, space, weapons, and even deoxyribonucleic acid (DNA). He developed a rough scale of the potential for catastrophe, assessing the risk of loss of life and property against expected benefits, and concluded that people would be better off learning to live without some complex technological systems, or with greatly modified ones (Perrow 1984). A central problem he identified involved “externalities,” the social costs of an activity not reflected in its price, such as pollution, injuries, and anxieties. These costs, he noted, are often borne by those who do not benefit from the activity or are unaware of the externalities.

This, of course, is another corollary to the fragmentation problem. To consider the technological system in isolation from the larger social system within which it is embedded is to invite enormous difficulties for the larger system while providing spurious profits for those controlling the subsystem.

Another interesting manifestation of the fragmentation problem arises in connection with two relatively new disciplines that address many problems formerly the exclusive province of sociology: operations research and management science. Each of these has its own professional organization and journal.

Operations research traces its ancestry to 1937, when a group of scientists, mathematicians, and engineers in Great Britain was organized to study a set of military problems: How could chaff best be used as a radar countermeasure? What were the most effective bombing patterns? How should destroyers be deployed to protect a convoy?

The efforts to solve these and related problems gave rise to a body of knowledge initially referred to as Operations Analysis and subsequently as Operations Research. A more or less official definition tells us that Operations Research is concerned with scientifically deciding how best to design and operate man-machine systems, usually under conditions requiring the allocation of scarce resources. In practice, the work of operations research involved the construction of models of operational activities, initially in the military and subsequently in organizations of all kinds. Management science, a term perhaps more congenial to the American industrial and business ear, emerged officially as a discipline in 1953 with the establishment of the Institute of Management Sciences.

In both cases, the declared impetus of the discipline was to focus on the entire system, rather than on components. One text points out that subdivisions of organizations began to solve problems in ways that were not necessarily in the best interests of the overall organizations. Operations research tries to help management solve problems involving the interactions of objectives. It tries to find the “best” decisions for “as large a portion of the total system as possible” (Whitehouse and Wechsler 1976).

Another text, using the terms management science and operations research interchangeably, defines them (or it) as the “application of scientific procedures, techniques, and tools to operating, strategic, and policy problems in order to develop and help evaluate solutions” (Davis, McKeown, and Rakes 1986, p. 4).

The basic procedure used in operations research/management science work involves defining a problem, constructing a model, and, ultimately, finding a solution. An enormous variety of mathematical, statistical, and simulation models has been developed, with more or less predictable consequences. “Many management science specialists were accused of being more interested in manipulating problems to fit techniques than . . . [working] to develop suitable solutions” (Davis, McKeown, and Rakes 1986, p. 5). The entire field often evokes the tale of the fabled inebriate who persisted in looking for his lost key under the lamppost, although he had lost it elsewhere, because “it is light here.”
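The define/model/solve cycle can be made concrete with a small linear-programming sketch. The products, coefficients, and resource limits below are invented for the illustration; SciPy’s linprog routine is used as one standard solver (it minimizes, so the objective is negated to maximize).

    # A minimal instance of the define/model/solve cycle.
    # Invented problem: two products, limited labor and material.
    # Maximize profit 3x + 5y subject to x + 2y <= 14 (labor),
    # 3x + y <= 18 (material), x >= 0, y >= 0.
    from scipy.optimize import linprog

    c = [-3, -5]        # linprog minimizes, so profits are negated
    A = [[1, 2],        # labor hours consumed per unit of each product
         [3, 1]]        # material consumed per unit of each product
    b = [14, 18]        # the scarce-resource limits

    result = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    print(result.x, -result.fun)   # optimal mix (4.4, 4.8), profit 37.2

The model answers only the question it encodes; whether labor and material are in fact the binding constraints, or whether the problem should have been posed at all, lies outside it, which is precisely the lamppost difficulty.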

Under the sponsorship of the Systems Theory and Operations Research program of the National Science Foundation, a Committee on the Next Decade in Operations Research (CONDOR) held a workshop in 1987. A report later appeared in the journal Operations Research, which subsequently asked operations researchers to comment on it (Wagner et al. 1989). One of the commentators expressed what appears to be a growing sentiment in the field by pointing out the limitations of conventional modeling techniques for professional work. Criticizing the CONDOR report for appearing to accept the methodological status quo, he emphasized the character of models as “at best abstractions of selected aspects of reality” (Wagner et al. 1989). He quoted approvingly from another publication: “thus while exploiting their strengths, a prudent analyst recognizes realistically the limitations of quantitative methods” (Quade 1988).

This, however, repeats an inaccurate statement of the difficulty. It is not the limitations of quantitative methods that are in question but rather the failure to recognize the character of the situations to which they are applied. Sociologists distinguish between established situations, whose parameters can be defined precisely and for which valid analytic means exist to describe meaningful relationships within them, and emergent situations, whose parameters are known only incompletely and for which satisfactory analytic techniques are not available within the time constraints of necessary action (Boguslaw [1965] 1981). In established situations, mathematical or statistical models, along with other forms of rational analysis, are quite satisfactory. In emergent situations, however, they can yield horrendous distortions. Fifty top U.S. corporation executives, when interviewed, recognized and acted upon this distinction more or less intuitively, although the situations presented to them were referred to as Type 1 and Type 2, respectively (Pelton, Sackmann, and Boguslaw 1990).

Individual persons, organizations, or enterprises may be viewed, on the one hand, as self-contained systems. On the other, they may be viewed as subsystems of larger social systems. Unfortunately, efforts are continually made to gloss over this dichotomy through a form of fragmentation, by treating a subsystem or collection of subsystems as equivalent to a larger system. It is this relationship between system and subsystem that constitutes the core of the dilemma continuing to confront systems theory.

Achieving a satisfactory resolution of the discrepancy between the needs of individuals and the objectives of the systems within which they find themselves embedded, or by which they are affected, remains an unsolved problem as the twentieth century draws to a close.
