ABSTRACT
In the 1970s, the American medical profession was confronted with the fact that the
internal norms that had guided its practice for centuries were ill-suited to the demands of
participatory democracy. As is well-documented in texts on the emergence of the
contemporary field of bioethics (1,2), the medical profession's paternalistic attitude
regarding what was best for patients in terms of both the information and the interventions
they received began to be supplanted in the 1960s and 1970s by the call for the medical
profession to be accountable to democratic norms of self-determination, transparency, and
justice. This demand reflected the broader calls for democratic reform that defined the era:
civil rights for African-Americans, women, and others who were marginalized in
American society; more open and democratic practices by government agencies charged
with protecting public health and the environment; and international recognition of the
need for regulations governing the use of humans in medical experimentation. In 1948, in
response to the war crimes tribunals against Nazi doctors, the Nuremberg Code articulated
as its first principle that “the voluntary consent of the human subject is absolutely
essential” (3). It took decades before the U.S. codified this principle into law, decades
during which egregious violations of the principle were exposed in the Tuskegee
experiments and a host of other research on vulnerable groups such as the terminally ill,
children with intellectual disabilities, the poor, the institutionalized elderly, and military recruits. By the late
1970s a national commission on the protection of human research subjects had articulated
the ethical principles that would be formalized in regulations governing the growing
enterprise of federally funded research. These principles were manifested in the
requirements of informed consent, the justifiability of research risks, and fairness in the
selection of subjects.