The immediacy of subject matter in social science underscores the importance of ethical issues in research by social scientists. This is particularly true in sociology. A rather small percentage of sociologists use historical documents or cultural products as data. The majority rely upon interviews with actively cooperating subjects, records relating to persons still living or recently alive, unobtrusive observation of live actors, or participant studies within interacting groups. Sociological research typically focuses on relatively large study populations and poses questions relevant to many dimensions of individual and social life. Both the process and application of sociological inquiry may conceivably affect large numbers of subjects in an adverse manner. Thus, the question of “right” and “wrong” in research has been a continual (though not always powerful or explicit) concern within the profession.
Ethics may be conceptualized as a special case of norms governing individual or social action. In any individual act or interpersonal exchange, ethics connotes principles of obligation to serve values over and above benefits to the people who are directly involved. Examination of ethical standards in any collectivity provides insights into its fundamental values; identification of ethical issues provides clues to its basic conflicts. This is as true of sociology as a profession as it is of other social systems.
The most abstract and general statements about ethics in sociological literature reflect broad agreement about the values that social inquiry should serve. Bellah (1983) writes that ethics constitutes an important, though typically implicit, topic in the thinking of sociology’s founders (such as Durkheim and Weber) and leading modern practitioners (such as Shils and Janowitz). Even while consciously striving to distinguish their emerging discipline as a science free of values and moralizing, the early sociologists appeared to have a distinct ethical focus. The discipline’s founders implied and sometimes stated that sociology necessarily involved ethical ends, such as identification of emerging social consensus or the development of guidelines for assessing social good. Modern sociologists have emphasized improvement of society’s understanding of itself as the discipline’s principal ethical end, as opposed to determining a specific direction or developing technology for social change. In the broadest sense, contemporary sociologists seem to consider the raising of consciousness as quintessentially ethical activity and social engineering by private or parochial interests as ethically most objectionable. In the phraseology of Edward Shils, this means contributing to “the self-understanding of society rather than its manipulated improvement” (Shils 1980, p. 76).
Dedication to advancement of society’s understanding of itself through diverse scientific approaches may comprise the fundamental ethic of sociology. A Code of Ethics published by the American Sociological Association (ASA) in 1989 (American Sociological Association 1989) gave concrete expression to this ethic. Concentrating primarily on research, the Code of Ethics emphasized three specific areas of concern: (1) full disclosure of motivations for and background of research; (2) avoidance of material harm to research subjects, with special emphasis on issues of confidentiality; and (3) qualifications to the technical expertise of sociology.
The first area appeared to reflect primarily a fear among sociologists that agencies of social control (such as military or criminal justice units) may seek intelligence under the guise of social research. Thus, the code advised sociologists not to “misuse their positions as professional social scientists for fraudulent purposes or as a pretext for gathering intelligence for any organization or government.” The mandate for disclosure has implications involving relations among professionals, between professionals and research subjects, and between professionals and the public. Another provision of the code read, “Sociologists must report fully all sources of financial support in their publications and must note any special relation to any sponsor.” (p. 1)
The second area of concern in the code placed special emphasis on assurance of confidentiality to research subjects. It stressed the need for extraordinary caution in making and adhering to commitments. As if to recognize the absence of legal protection for confidentiality in the research relationship and to mandate its protection nevertheless, the code stated: “Sociologists should not make any guarantees to respondents, individuals, groups, or organizations—unless there is full intention and ability to honor such commitments. All such guarantees, once made, must be honored” (p. 2).
As a subject of professional ethics, the third area is extraordinary. Provisions mandating disclosure of purpose and assurance of confidentiality might appear in the code of ethics of any profession dealing regularly with human clients or subjects. But it is surprising to find, as a provision in the 1989 ASA Code of Ethics, the mandate that sociologists explicitly state the shortcomings of methodologies and the openness of findings to varying interpretation. The following quote illustrates provisions of this nature:
Since individual sociologists vary in their research modes, skills, and experience, sociologists should always set forth ex ante the limits of their knowledge and the disciplinary and personal limitations that condition the validity of findings. To the best of their ability, sociologists should . . . disclose details of their theories, methods and research designs that might bear upon interpretation of research findings. Sociologists should take particular care to state all significant qualifications on the findings and interpretations of their research. (p. 2)
Themes in the 1989 Code of Ethics dealing with disclosure and confidentiality reflect widely shared values and beliefs in the profession. Historically, sociology has stood out among the learned professions as critical of the authority of established institutions such as governments and large business firms. But propositions about the limitations of theories and methodologies and the openness of findings to varying interpretation suggest conflict. In the late twentieth century, sociological methodologies encompassed both highly sophisticated mathematical modeling of quantitative data and observation and theory building based entirely on qualitative techniques. Acknowledgment of the legitimacy of these differences in an ethical principle reflects a strenuous attempt by sociology as a social system to accommodate subgroups whose basic approaches to the discipline are inconsistent with each other in important respects.
A more recent formulation of the ASA Code of Ethics, published in 1997 (American Sociological Association 1997), restates the basic principles of serving the public good through scientific inquiry and avoiding harm to individuals or groups studied. But a shift in emphasis appears to have occurred. The 1989 Code explicitly cited the danger of governmental or corporate exploitation of the sociologist’s expertise. The 1997 Code, though, stresses ethical challenges originating primarily from the researcher’s personal objectives and decisions.
The 1997 Code of Ethics, for example, contains a major section on conflict of interest. According to this section, “conflicts of interest arise when sociologists’ personal or financial interests prevent them from performing their professional work in an unbiased manner” (p. 6; emphasis added). A brief item on “disclosure” asserts an obligation by sociologists to make known “relevant sources of financial support and relevant personal or professional relationships” that may result in conflicts of interest vis-à-vis employers, clients, and the public (p. 7).
The two most extensive sections in the 1997 Code are those on confidentiality and informed consent. The directives addressing confidentiality place extraordinary responsibility on the individual sociologist. Pertinent language states that “confidential information provided by research participants, students, employees, clients, or others is treated as such by sociologists even if there is no legal protection or privilege to do so” (emphasis added). The Code further instructs sociologists to “inform themselves fully about all laws and rules which may limit or alter guarantees of confidentiality” and to discuss “relevant limitations on confidentiality” and “foreseeable uses of the information generated” with research subjects (p. 9). It is recommended that information of this kind be provided at the “outset of the relationship.” Sociologists are neither absolutely enjoined from disclosing information obtained under assurances of confidentiality nor given clear guidance about resolving pertinent conflicts. The Code of Ethics states:
Sociologists may confront unanticipated circumstances where they become aware of information that is clearly health- or life-threatening to research participants, students, employees, clients, or others. In these cases, sociologists balance the importance of guarantees of confidentiality with other priorities in [the] Code of Ethics, standards of conduct, and applicable law. (p. 9)
The section on informed consent, the most extensive in the 1997 Code of Ethics, reflects a frequent dilemma among sociologists. The basic tenets of informed consent as stated here approximate those in all fields of science. Obtaining true consent requires eliminating any element of undue pressure (as might occur in the use of students as research subjects) or deception regarding the nature of the research or risks and benefits associated with participation. In social research, however, statement of the objectives of an investigation may affect attitudes and behavior among research subjects in a manner that undermines the validity of the research design. Recognizing this possibility, the Code acknowledges instances when deceptive techniques may be acceptable. These include cases where the use of deception “will not be harmful to research participants,” is “justified by the study’s prospective scientific, educational, or applied value,” and cannot be substituted for by alternative procedures (p. 12).
A review of historical developments, events, and controversies of special importance to sociologists in the decades preceding the 1989 and 1997 Codes of Ethics promotes a further appreciation of the concerns they embody. Perhaps the most far-reaching development in this era was the introduction of government funding into new areas of the sociological enterprise. In sociology, as in many areas of science, government funding provided opportunities to expand the scope and sophistication of research, but it created new ethical dilemmas and accentuated old ones.
Increased government funding created interrelated problems of independence for the sociological researcher and anonymity for the research subject. A report by Trend (1980) on work done under contract with the U.S. Department of Housing and Urban Development (HUD) illustrates one aspect of this problem. Possessing a legal right to audit HUD’s operations, the General Accounting Office (GAO) could have examined raw data complete with individual identifiers despite written assurances of confidentiality to the subjects by the research team. Sensitivity on the part of the GAO and creativity by the sociologists averted an involuntary though real ethical transgression in this instance. But the case illustrates both the importance of honoring commitments to subjects and the possibility that ethical responsibilities may clash with legal obligations.
Legal provisions designed explicitly to protect human subjects emerged in the 1970s. Regulations developed by the U.S. Department of Health and Human Services (DHHS) require that universities, laboratories, and other organizations requesting funds establish institutional review boards (IRBs) for protection of human subjects. The 1997 ASA Code of Ethics makes frequent reference to these boards as a resource for resolution of ethical dilemmas.
Sociologists, however, have not always expressed confidence in the contributions of IRBs. One commentary (Hessler and Freerks 1995) argues that IRBs are subject to great variability in protecting the rights of research subjects at the local level. Others contend that deliberations of these boards take place in the absence of appropriate standards or methods of analysis. The expertise and concerns of IRBs may not apply well to actual risks posed by sociological research methods. Biomedical research, the primary business of most IRBs, potentially poses risks of physical injury or death to the research subject. Except in extraordinary circumstances, sociological techniques expose subjects at worst to risks of embarrassment or transient emotional disturbance. IRB requirements often seem inappropriate or irrelevant to sociology. In the words of one commentator, the requirement by IRBs that researchers predict adverse consequences of proposed studies encourages sociologists to engage in exercises of “futility, creativity, or mendacity” (Wax and Cassell 1981, p. 226).
Several instances of highly controversial research have helped frame discussion of ethics among sociologists. Perhaps most famous is the work of Stanley Milgram (1963), who led subjects to believe (erroneously) that they were inflicting severe pain on others in a laboratory situation. This experiment, which revealed much about the individual’s susceptibility to direction by authority figures, was said by some to present risk of emotional trauma to subjects. Milgram’s procedure itself seemed to duplicate the manipulative techniques of authoritarian dictators. Distaste among sociologists for Milgram’s procedure helped crystallize sentiment in favor of public and professional scrutiny of research ethics.
The Vietnam era saw increasing suspicion among sociologists that government might use its expertise to manipulate populations both at home and abroad. A seminal event during this period was the controversy over a U.S. Army-funded research effort known as Project Camelot. According to one commentator, Project Camelot aimed at ascertaining “the conditions that might lead to armed insurrections in . . . developing countries so as to enable United States authorities to help friendly governments eliminate the causes of such insurrections or to deal with them should they occur” (Davison 1967, p. 397). Critical scrutiny by scholars, diplomats, and congressional committees led to cancellation of the project. But provisions in the 1989 Code of Ethics on disclosure and possible impacts of research clearly reflect its influence.
The end of the Cold War and increasing litigiousness among Americans may help explain the shift in emphasis between the 1989 and 1997 ASA Codes of Ethics. As noted above, the later Code appears to emphasize ethical issues facing sociologists as individuals rather than as potential tools of government and big business. Many sociologists have stories to tell about actual or potential encounters with the legal system over the confidentiality of data obtained from research subjects. The visibility and frequency of such encounters may have helped shape the 1997 Code’s section on confidentiality.
The most celebrated confrontation of a sociologist with the law involved Rik Scarce, who was incarcerated for 159 days for refusing to testify before a grand jury investigating his research subjects. Scarce’s case is described by Erikson (1995):
Scarce found himself in an awful predicament. He was engaged in research that rested on interviews with environmental activists, among them members of the Animal Liberation Front. One of his research subjects came under investigation in connection with a raid on a local campus, and Scarce was ordered to appear before a grand jury investigation. He refused to answer questions put to him, was found to be in contempt, and was jailed for more than five months.
Some evidence suggests that the institutional structure surrounding social research has proven an uncertain asset in personal resolution of ethical issues such as Scarce’s. The 1997 ASA Code of Ethics advises sociologists confronting dilemmas regarding informed consent to seek advice and approval from institutional review boards or other “authoritative [bodies] with expertise in the ethics of research.” But IRBs typically serve as reviewers of research plans rather than consultative bodies regarding issues encountered in execution of research; the phrase “authoritative [bodies] with expertise in the ethics of research” has a vague ring. Lee Clark’s (1995) description of his search for guidance in responding to a law firm’s request for his research notes illustrates the limitations of IRBs and related individuals and agencies:
. . . I talked with first amendment attorneys, who said academic researchers don’t enjoy journalists’ protections. . . . I was told that if I destroyed the documents, when there was reason to expect a subpoena, then I would be held in contempt of court. I talked with ASA officials and the chair of ASA’s Ethics Committee, all sympathetic but unable to promise money for an attorney. They were equally certain of my obligations according to the Ethics Code. … I talked with lawyers from Stony Brook [where Clark had performed his research], who told me that the institution would not help. Lawyers for Rutgers, where I was . . . employed, said they wouldn’t help either.
In all human activity, individuals ultimately face ethical issues capable of resolution only through personal choice among alternatives. But increasingly, sociologists seem to face these choices unaided by distinct guidelines from their profession. This default to personal responsibility derives in part from the ambiguity in two philosophical principles widely encountered in sociological discourse, utilitarianism and moral relativism.
As an ethical principle, utilitarianism seems to provide a convenient rule for making decisions. The prevailing morality among modern cosmopolitans, utilitarianism applies the principle of the greatest net gain to society in deciding questions of research ethics. This perspective places emphasis on degrees of risk or magnitude of harm that might result from a given research effort. Under this perspective, Project Camelot (cited above) may have deserved a more favorable reception. Davison (1967) suggests that completion of the project would probably not have caused appreciable harm. He comments:
If past experience is any guide, it would have contributed to our knowledge about developing societies, it would have enriched the literature, but its effects on this country’s international relations would probably have been tangential and indirect. (p. 399)

Several well-known and ethically controversial studies may be justified on utilitarian grounds. Among the best known is Laud Humphreys’s study of impersonal sex in public places (1975). Humphreys gained access to the secret world of male homosexuals seeking contacts in public restrooms by volunteering his services as a lookout. Despite its obvious deception, Humphreys’s work received the support of several homophile organizations (Warwick 1973, p. 57), in part because it illustrated the prevalence of sexual preferences widely considered abnormal. In his study of mental institutions, Rosenhan (1973) placed normal (i.e., nonpsychotic) observers in psychiatric wards without the knowledge or consent of most staff. His study generated highly useful information on the imperfections of care in these institutions, but the deception and manipulation of his subjects (hospital staff) are undeniable.
As a rule for making decisions, though, utilitarianism presents both practical and conceptual problems. Bok (1978) points out the difficulty in estimating risks of harm (as well as benefits) from any research activity. The subtle and uncertain impacts of sociological research techniques (as well as associated findings) make prospective assessment of utilitarian trade-offs extremely problematical. Many traditional ethical constructs, moreover, contradict utilitarianism, implying that acts must be assessed on the basis of accountability to abstract principles and values (e.g., religious ones) rather than the practical consequences of the acts themselves.
Moral relativism provides some direction to the uncertainty implicit in utilitarianism. This principle assumes that “there are no hard and fast rules about what is right and what is wrong in all settings and situations” (Leo 1995). Under this principle, ethical judgment applies to ends as well as means. Moral relativism might provide ethical justification for a sociologist who, believing that the public requires greater knowledge of clandestine police practices, misrepresents his personal beliefs or interests in order to observe these practices. The very relativism of this principle, however, invites controversy.
The 1997 ASA Code of Ethics restates the profession’s fundamental ethic as striving to “contribute to the public good” and to “respect the rights, dignity, and worth of all people” (p. 4). Regarding research activity, the Code places primary emphasis on informed consent, protection of subjects from harm, confidentiality, and disclosure of conflicts of interest. But the Code, the institutional milieu of sociology, and the practical conditions under which sociological research takes place preclude strong direction for individuals in the ethical dilemmas they encounter.