RENA LEDERMAN
Princeton University
Introduction:
Anxious borders between work and life in a time of
bureaucratic ethics regulation
Over the past few decades, anthropologists have reassessed the
value of fieldwork “at home,” in particular, and reviewed the disciplinary status of field practice, in general—in both cases from various angles—in light of the shifting contemporary circumstances
and opportunities for ethnographic work. The articles in this AE
Forum extend that scrutiny by focusing attention on informality in ethnographic fieldwork both at home and away. Informality here refers to the undemarcated moments of ethnographic practice when “research” and “daily
life” are inextricable. Informality is a prominent feature of partly or wholly
unfunded (self-funded) research pursued part-time at home; it is also evident, if less notable, perhaps, in funded research anywhere, insofar as “doing
fieldwork” is all about embedding oneself in ongoing social situations not
designed by the investigator. Although informality is a long-standing, acknowledged fact in ethnographic sociology—in which unfunded research at
home is the norm—it is as yet largely unremarked in anthropology despite
being a common corollary of the trend to home-based fieldwork.
In both disciplines, informality demands explicit attention because it has
recently become problematized in new ways in the United States. For over
three decades, institutional review boards (IRBs) have overseen researchers’
compliance with U.S. federal regulations concerning ethical “human subjects” research. During the past five or six years, however, federal regulators have been on high alert, a stance passed along to local IRBs as an increasing fear of legal and financial consequences to their home institutions
should they slacken efforts to fulfill their internal monitoring responsibilities. Federal regulations presuppose a research process whose locations,
time frames, personnel, and protocols are all clearly demarcated (in funding
proposals, typically) such that research may be clearly distinguished from a
scholar’s “ordinary life,” in which other ethical guarantees and constraints
(like the First Amendment and libel laws) apply. Against this presumption of clearly bracketed research, informal fieldwork blurs the work–life
distinction.
Consequently, ethnographers working informally in the United
States have encountered ethical, political, and regulatory ambiguities,
administrative frustrations, and, occasionally, serious trouble. The articles in this AE Forum provide several case studies;
explore some of the historical, structural, and political contexts; and offer suggestions for action—as openings for discussion of the regulatory problems associated with informal
research. They take a sounding of anthropological and sociological experience with research that flies under the radar,
and with researcher identities that slide among subject positions, along the way identifying some of its contradictory
ethical–political entailments and raising both epistemological and practical questions.
Background
Although the articles that follow provide important background and offer guidance for further reading, an initial
historical overview may help. A response to revelations
concerning Nazi medical experimentation, the Nuremberg
Code is arguably the foundational text for subsequent international agreements concerning research involving human beings, notably, the 1964 Declaration of Helsinki
(World Medical Association 2002). The Helsinki declaration elaborated a set of universalistic ethical principles concerning the treatment of medical-research participants—
including informed consent, confidentiality, and protection
from harm—that have been integrated into many national
and funding-agency research ethics codes.
The history of the current human-subject protections
system in the United States is well documented (see the history module of the National Institutes of Health [NIH] online
ethics-certification course [National Cancer Institute n.d.],
which has links to the major national and international documents; and see, e.g., Faden and Beauchamp 1986 and citations elsewhere in this AE Forum). Although it was not
the first such exposé (McCarthy 2001), Henry Beecher’s 1966
New England Journal of Medicine article documenting scores
of horrifying U.S. medical-research abuses prompted the
NIH to develop its Policies for the Protection of Human Subjects that year. In 1972, media exposure of the Public Health
Service’s Tuskegee syphilis study—which had been regularly
reapproved over its 40-year duration (Jones 1993)—led to
passage of the National Research Act of 1974. This act raised
the NIH guidelines to federal regulatory status when they
were adopted by the Department of Health, Education, and
Welfare (now the Department of Health and Human Services
[DHHS], of which NIH is part) as “Code of Federal Regulations, Title 45 ‘Public Welfare,’ Part 46 ‘Protection of Human
Subjects’ ” (DHHS 2005). The 45 CFR 46 regulations became
known as the “Common Rule” in 1991, when 16 other federal agencies that fund human-subjects research signed on
(each using its own regulatory reference code).
As specified in 45 CFR 46.103, each institution that plans
to accept federal research funding must file an “assurance”
with the federal agency providing the funds or else have a
Federalwide Assurance (FWA) on file with the NIH–DHHS
Office for Protection from Research Risks (see, esp.,
Shweder this issue). This document obligates those institutions to set up one or more IRBs (often referred to as
“human-subjects committees”), and has several other procedural provisions (DHHS 2005:103[b]). Notably, IRBs are
mandated to perform prospective reviews of research plans,
and their judgments are final (there is no appeal process).
Research ethics regulation in the United States may be
unique for its relatively long history compared with that at
the national (government or funding-agency) level in other
countries. The United States is not unique for orienting its
research ethics regulations around biomedicine, however. A
cursory review of position papers from the European Union
(e.g., see Lewis et al. [2004] on European Codes of Practice
2002), the United Kingdom (see, e.g., University of York and
Oxford Brookes University 2003–05 for Economic and Social Research Council documents and many useful links),
Canada (Jorgensen n.d.; Panel on Research Ethics [PRE]
2004), Australia (Chalmers 2001), and elsewhere suggests that
biomedical research risks in the context of global inequalities
have been the main motive and focus for the development of
ethics codes and guidelines by funders, professional associations, and national agencies. On account of this widespread
original motive and focus, developing ethical rubrics appropriate to social-science and humanities work is a pervasive
contemporary concern. Needless to say, the structural and
political challenges faced by social researchers around the
world are not monolithic.
For example, there is wide variation in the European
Union (such that a key contemporary project aims for “the
harmonization” of diverse funding-agency, professional-association, and national research ethics codes); however,
at the same time, “there are no formal ethics approval procedures and issues such as informed consent, confidentiality, and dissemination of information to research participants are not reviewed at the funding stage” (Lewis et al.
2004:5). Canada’s recently published advisory guidelines for
social research (PRE 2004)—aimed at changing the provisions of the Tri-Council Policy Statement that governs the
practices of Canada’s three main funding agencies—are exemplary in their recognition of the “spectrum” of disciplinary
methodologies and ethical frameworks; their sensitivity to
the needs of fields that, as those advisory guidelines put it,
do not adopt “paradigmatic/positivist/experimentalist assumptions”; and their acknowledgment of the importance
of protecting both academic freedom and human research
participants. Still, “there is no national accreditation or compliance, surveillance or oversight system in place” (Lewis
et al. 2004) for research ethics boards (the Canadian equivalent of U.S. IRBs).
Against this background, one may consider the U.S. situation. It is important to note the strikingly accidental quality
of the U.S. system of human-subjects protection regulations,
in particular its origin as an incremental reworking of narrowly medical NIH guidelines rather than as a more deliberately designed system developed on the basis of systematic,
broad-based consultations with representatives of the disciplines to be regulated (see, e.g., Pattullo 1982, 1984). Because of this particular history, the code has been a problem
for social researchers from its inception (i.e., the present hue
and cry recapitulates protests that were already sharp and
sophisticated in the 1970s). Inconsistencies in the U.S. regulations likely reflect their cut-and-paste history, whereas,
elsewhere, human-research ethics codes may have emerged
from more apparently systematic processes without necessarily happier results. In Britain, for example, one needs to
take account of the generally fraught situation of universities
and research bodies, in which reductive accountability standards, modeled on the financial audit, have to be continually
accommodated or resisted (Strathern 2000).
For U.S. social research, a defining moment came in
1979, when a national commission, established by the National Research Act of 1974 to develop policy and procedures,
published a statement of general ethical principles known
as the “Belmont Report” (National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research 1979). The Belmont principles—“justice,” “beneficence,” and “respect for persons”—still guide the federal
system and inform basic IRB procedures: respectively, the
scrutiny of research-subject selection; harm–benefit calculi;
and informed consent and provisions for preserving confidentiality. Nevertheless, regulatory revisions also issued in
1979—which expanded 45 CFR 46 application to all research
regardless of funding—provoked months of heated criticism
by social scientists, and new revisions were published in 1981
(see, esp., Pattullo 1982; Tropp 1982). Confusion over the
resulting tangle of rulings prompted the writing of a book-length IRB Guidebook in the early 1980s (Penslar with Porter
1993), to no effect.
One wonders nowadays whether history is repeating
itself. Ever since 45 CFR 46 became the “Common Rule”
in 1991, various national and presidential study commissions have been convened, hearings held, and reports filed
in efforts to rework the system. The most notable of the
commissions, the National Bioethics Advisory Commission
(NBAC), held hearings over a six-year period and, just a few
years ago, issued its final policy recommendations for an
overhaul of the federal system (NBAC 2001). As the NBAC
was completing its work, however, a series of medical research tragedies at major research universities, such as the
University of Pennsylvania and Johns Hopkins University,
provoked an upsurge of regulatory hypervigilance, both nationally and by local IRBs, and a reactive expansion of regulatory oversight—IRB “mission creep”—into new research
areas decidedly distant from those for which the regulations
were designed. All of this has reawakened concern along a
wide spectrum of affected parties—including not only social scientists but also literature professors, oral historians,
lawyers, and journalists—and prompted a round of meetings
and position papers by individuals and professional associations (e.g., American Association of University Professors
2001, 2002; Gunsalus et al. 2005) and the beginnings of a
major rethinking of the legal basis for the regulatory system
as a whole (e.g., Hamburger 2005).
AE Forum articles
The collective focus of the articles in this AE Forum is relatively narrow, concerned with the U.S. situation, in general,
and with problems regarding IRB oversight of ethnography,
in particular.
First, although the set of articles does not address the
issue directly, there is no doubt about the importance of attending to differences, similarities, and mutual influences
among cross-national discourses on ethics codes, regulation, and accountability. Anthropological and other kinds of
social research cross national borders, of course, and may
involve cross-national collaborations, all of which necessitates an awareness of the politics of diverse national regulatory situations. The AE Forum commentary accompanying
these articles provides a glimpse of that global canvas.
A comparative framework would help clarify the specificity of the U.S. case. Along with a better understanding of
the political dynamics of funding, access, and corporatization in U.S. educational institutions, a comparative perspective might both sharpen and nuance the rapidly emerging
national debate about conflicts between the present system
for ensuring ethical human-research practice, on the one
hand, and, on the other hand, First Amendment rights of
social researchers as citizens and related principles of academic freedom that protect critical research so central to the
mission of the nation’s colleges and universities. This is an
important focus most especially of the articles by Jack Katz
and Richard A. Shweder. Useful comparisons may be drawn
between this emergent U.S. discourse and parallel Canadian
arguments over academic freedom in the regulation of social
and humanistic research.
Second, these articles pay special attention to qualities
of ethnographic fieldwork that set it apart from other kinds
of social research. Without question, fieldworkers in anthropology, sociology, nursing, education, and other fields also
conduct systematic surveys and formal interviews: structured interactions in which the researcher works to control
contextual factors like topical frames and sequences. But
participant-observation is distinctive for placing contextual
control into the hands of research participants, a characteristic that makes it uniquely problematic for IRB regulation
(as even regulatory insiders have long recognized). Exaggerating the point, the articles zero in on partly or wholly self-funded research “at home,” in which the fieldwork–everyday
life distinction is maximally blurred.
All of the anthropologists represented in this set of articles have previously done funded fieldwork outside the
United States—Daniel Bradburd in Iran, Rena Lederman in
Papua New Guinea, and Shweder in India—as well as the
work described here; sociologist Katz has conducted ethnographic work on urban neighborhoods, law, crime, emotions, and other topics in the United States. Reviewing the
regulatory situation to introduce our collective concerns,
Lederman reports the results of a comparative investigation
into disciplinary knowledge-making for which IRB discourse
is both content and context. She locates the grounds for
ethical–methodological miscommunication among practitioners from related disciplines and identifies presuppositions in the federal regulations that weigh particularly heavily on ethnographers. Bradburd provides contrastive cases
involving studies both of everyday attitudes and behavior
and of marginally legal practices. He uses these illustrative
examples of informal ethnographic work as opportunities to
reflect on the process of deciding whether, when, and how
to make the work known to his IRB.
Lederman and Bradburd suspect that their circumstances, experiences, and choices working with elites and
ordinary people of various sorts are not idiosyncratic. With
cases like these in mind, Katz offers an ambitious, detailed
analysis and critique of the impact of federal regulation on
fieldwork, paying particular attention to ethnographic sociology. Arguing that ethnographers have been driven underground by the ways that IRBs have come to interpret federal
regulations, he outlines a strategy of practical responses that
will enable researchers to be both ethical and aboveboard
about what they actually do. Finally, echoing Katz’s emphasis
on practical engagement, Shweder provides a focused discussion of the University of Chicago community’s approach
to its FWA, with implications for ethnographers elsewhere.
He centers attention on the troubling First Amendment impacts of current IRB assumptions, an issue that has been
difficult to raise in the wake of anxieties over researcher
abuses.
The articles’ emphases complement one another. We
hope that, together with AE Forum commentary, they
productively inform, provoke, and refresh anthropological
discussion.
[institutional review boards (IRB), mission creep, ethnography, fieldwork, research ethics, audit, unfunded research, informality]
References cited
American Association of University Professors
2001 Protecting Human Beings: Institutional Review Boards
and Social Science Research. Academe 87(3). Electronic document, http://www.aaup.org/statements/Redbook/repirb.htm,
accessed August 2, 2003.
2002 Should All Disciplines Be Subject to the Common Rule?
Human Subjects of Social Science Research. Academe 88(3).
Electronic document, http://www.aaup.org/publications/
Academe/2002/02mj/02mjftr.htm, accessed August 2,
2003.
Beecher, Henry K.
1966 Ethics and Clinical Research. New England Journal of
Medicine 274(24):1354–1360.
Chalmers, Donald
2001 Research Ethics in Australia. In National Bioethics Advisory Commission, Ethical and Policy Issues in Research Involving Human Participants, vol. 2: Commissioned Papers and Staff
Analysis. Pp. A1–A66. Bethesda, MD: NBAC.
Department of Health and Human Services (DHHS)
2005 Code of Federal Regulations, Title 45 “Public Welfare,”
Part 46 “Protection of Human Subjects.” (Revised 6/23/05, effective 6/23/05). Electronic document, http://www.hhs.gov/
ohrp/humansubjects/guidance/45cfr46.htm, accessed June
12, 2006.
Faden, Ruth, and Tom L. Beauchamp
1986 A History and Theory of Informed Consent. Oxford: Oxford
University Press.
Gunsalus, C. K., Edward Bruner, Nicholas C. Burbules, Leon Dash,
Matthew Finkin, Joseph P. Goldberg, William T. Greenough,
Gregory A. Miller, Michael G. Pratt, and Deb Aronson
2005 Improving the System for Protecting Human Subjects:
Counteracting IRB “Mission Creep.” The Illinois White Paper.
University of Illinois at Urbana–Champaign, College of Law.
Electronic document, http://www.law.uiuc.edu/conferences/
whitepaper, accessed November 1, 2005.
Hamburger, Philip
2005 The New Censorship: Institutional Review Boards.
Supreme Court Review, October 2004 Term:271–354.
Jones, James H.
1993 Bad Blood: The Tuskegee Syphilis Experiment. New York:
Free Press.
Jorgensen, Dan
N.d. Response to the Joint Committee Report on the Application of the Tri-Council Policy Statement on Ethical Conduct for
Research Involving Humans. Unpublished MS, Department of
Anthropology, University of Western Ontario.
Lewis, Graham, Mary Boulton, Nik Brown, and Andrew Webster
2004 The International Dimension to Research Ethics: The Significance of International and Other Non-UK Frameworks for
UK Social Science. Discussion Paper, 2. Economic and Social
Research Council Research Ethics Framework. Electronic document, http://www.york.ac.uk/res/ref/docs/REFpaper2 v2.pdf,
accessed May 1, 2006.
McCarthy, Charles R.
2001 Reflections on the Organizational Locus of the Office for
Protection from Research Risks. In National Bioethics Advisory
Commission, Ethical and Policy Issues in Research Involving
Human Participants, vol. 2: Commissioned Papers and Staff
Analysis. Pp. H1–H28. Bethesda, MD: NBAC.
National Bioethics Advisory Commission (NBAC)
2001 Ethical and Policy Issues in Research Involving Human Participants, vol. 1: Report and Recommendations. Bethesda, MD:
NBAC.
National Cancer Institute
N.d. Human Participant Protections Education for Research Teams. Electronic document, http://cme.cancer.gov/clinicaltrials/learning/humanparticipant-protections.asp, accessed September 30, 2005.
National Commission for the Protection of Human Subjects of
Biomedical and Behavioral Research
1979 Belmont Report: Ethical Principles and Guidelines for the
Protection of Human Subjects of Research. Washington, DC:
U.S. Government Printing Office.
Panel on Research Ethics
2004 Giving Voice to the Spectrum. Report of the Social Sciences
and Humanities Research Ethics Special Working Committee
to the Interagency Advisory Panel on Research Ethics. Ottawa:
Interagency Advisory Panel and Secretariat on Research Ethics.
Pattullo, E. L.
1982 Modesty Is the Best Policy: The Federal Role in Social Research. In Ethical Issues in Social Science Research.
Thomas Beauchamp, Ruth R. Faden, LeRoy Walters, and R. Jay
Wallace, eds. Pp. 373–390. Baltimore: Johns Hopkins University
Press.
1984 Institutional Review Boards and Social Research: A Disruptive, Subjective Perspective, Retrospective and Prospective. In
NIH Readings on the Protection of Human Subjects in Behavioral and Social Science Research: Conference Proceedings and
Background Papers. Joan E. Sieber, ed. Pp. 10–17. Frederick, MD:
University Publications of America.
Penslar, Robin Levin, with Joan P. Porter
1993 Office for Human Research Protections (OHRP) IRB
Guidebook. Electronic document, http://www.hhs.gov/ohrp/
irb/irb guidebook.htm, accessed August 3, 2003.
Strathern, Marilyn
2000 Audit Cultures: Anthropological Studies in Accountability,
Ethics and the Academy. London: Routledge.
Tropp, Richard A.
1982 A Regulatory Perspective on Social Science Research.
In Ethical Issues in Social Science Research. Thomas
Beauchamp, Ruth R. Faden, LeRoy Walters, and R. Jay
Wallace, eds. Pp. 391–415. Baltimore: Johns Hopkins University
Press.
University of York and Oxford Brookes University
2003–05 Developing a Framework for Social Science Research
Ethics. Electronic document, http://www.york.ac.uk/res/ref,
accessed May 1, 2005.
World Medical Association
2002[1964] Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. Electronic document, http://www.wma.net/e/policy/b3.htm, accessed August
3, 2003.
Rena Lederman
Department of Anthropology
Princeton University
Princeton, NJ 08544
[email protected]