In 1967, Management Science initiated a column “Information Systems in Management
Science.” Early definitions of management information systems cited in Zhang et al. (2004, p. 147)
include “an integrated man/machine system for providing information to support the operation,
management, and decision-making functions in an organization” (Davis, 1974) and “the effective
design, delivery and use of information systems in organizations” (Keen, 1980). The Banker and
Kauffman (2004) survey of management science literature identifies the first HCI article as
Ackoff's (1967) investigation of challenges in handling computer-generated information. Work
on cognitive style and system design occupied researchers in the late 1960s and through the
1970s.
Once a computer was acquired, managers were chained to it almost as tightly as Shackel's
operators and data entry slaves. A system was there to be used. Of course, a threatening or
disruptive system could be ignored or resisted. Sociotechnical design, incorporating end-user
involvement to facilitate adoption, became part of the management literature (e.g., Mumford, 1971).
In 1970, Xerox PARC was founded and attracted researchers from the labs of Engelbart and
Sutherland. A major PARC focus was to develop languages and tools to support programmers.
Well into the 1970s, most professional programmers, like most managers, were not hands-on
computer users. Programs were flow-charted on paper, written on coding sheets, keypunched by
other people, and run by computer operators. Programmers typically picked up printed output.
Efforts to understand and support this notoriously idiosyncratic and unpredictable skill led to
research into the psychology of programming; Weinberg (1971) was an influential survey.
The first widely read HCI book appeared in 1973: James Martin's Design of Man-Computer
Dialogues. A comprehensive survey of interfaces for data entry and operation, it began with a
vision of a future in which users “will become the tail that wags the whole dog . . . The computer
industry will be forced to become increasingly concerned with the usage of people, rather than
with the computer's intestines” (pp. 3-4).
1975-1985: HANDS-ON DISCRETIONARY USE, AND THE BIRTH OF CHI
More than a decade after the early visions of empowered computer users, discretionary use was
not widespread. Scientific computing enabled some engineers and programmers to enjoy access
to expensive computers, but this did not carry over to business computing with its efficient divi-
sion of labor. Even at universities, computer centers kept most student programmers from ever
seeing a computer.
Nevertheless, people took up programming because they enjoyed it, and programming became
the first profession to embrace discretionary hands-on computer use. Text editors provided an
alternative to coding sheets. Working as a professional programmer at a computer company, I
made this transition at my first opportunity in 1975. Many of the over one thousand studies of
programming psychology—research surveyed in Shneiderman (1980)—examined programming
skill in isolation, removed from particular organizational and management contexts.
Interest in this new category of discretionary users grew through the late 1970s as students, hob-
byists, and others interacted with time-shared mainframes, minicomputers, and microprocessor-
based home computers. Human Interaction with Computers by Smith and Green (1980) perched on
this cusp. It briefly addressed “the human as a systems component” (the nondiscretionary perspec-
tive); one-third was devoted to research on programming, and the remainder addressed “non-spe-
cialist people,” discretionary users who were not computer specialists: “It's not enough just to
establish what people can and cannot do; we need to spend just as much effort establishing what
people can and want to do” (p. viii; emphasis in the original).