
In Plain Sight: A Solution to a Fundamental Challenge in Human Research

Published online by Cambridge University Press: 01 January 2021

Extract

The physician-researcher conflict of interest, a long-standing and widely recognized ethical challenge of clinical research, has thus far eluded satisfactory solution. The conflict is fairly straightforward. Medical research and medical therapy are distinct pursuits; the former is aimed at producing generalizable knowledge for the benefit of future patients, whereas the latter is aimed at addressing the individualized medical needs of a particular patient. When the physician-researcher combines these pursuits, he or she serves two masters and cannot — no matter how well-intentioned — avoid the risk of compromising the duties owed in one of the professional roles assumed. Because of the necessary rigidity of a research protocol, the more demanding of the two masters is frequently the research.

The problem of the physician-researcher conflict has been evident since the first attempts to regulate human research in the United States. Otto E. Guttentag, a physician at the University of California School of Medicine in San Francisco, addressed the conflict in a 1953 Science magazine article.

Type
Symposium
Copyright
Copyright © American Society of Law, Medicine and Ethics 2012

References

Guttentag, O. E., “The Problem of Experimentation with Human Beings: The Physician's Point of View,” Science 117, no. 3035 (1953): 207–210.
Id., at 208.
Id., at 209.
Following Beauchamp and Childress, we use the word “subjects” rather than “participants” to refer to those who enroll in research because “not all humans enrolled in research have chosen to enroll.” Beauchamp, T. L. and Childress, J. F., Principles of Biomedical Ethics, 6th ed. (New York: Oxford University Press, 2009): at 141, note 1. (While “[c]learly, there is no perfect term,” we also think that the term “subjects” rather than “participants” appropriately highlights the power imbalance between researchers and those enrolled. Id.)
Saver, R. S., “Is It Really All about the Money? Reconsidering Non-Financial Interests in Medical Research,” Journal of Law, Medicine & Ethics 40, no. 3 (2012): 467–481.
See generally Glass, K. C. and Waring, D., “The Physician/Investigator's Obligation to Patients Participating in Research: The Case of Placebo Controlled Trials,” Journal of Law, Medicine & Ethics 33, no. 3 (2005): 575–583.
American Medical Association Council on Ethical and Judicial Affairs, Code of Medical Ethics, Opinion 2.07, Clinical Investigation, available at <http://www.ama-assn.org/ama/pub/physician-resources/medical-ethics/code-medical-ethics/opinion207.page?> (last visited December 6, 2012). The Opinion states: “In conducting clinical investigation, the investigator should demonstrate the same concern and caution for the welfare, safety, and comfort of the person involved as is required of a physician who is furnishing medical care to a patient independent of any clinical investigation.”
World Medical Association, “Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects,” last updated October 2008, available at <http://www.wma.net/en/30publications/10policies/b3/17c.pdf> (last visited December 6, 2012). The Declaration states: “In medical research involving human subjects, the well-being of the individual research subject must take precedence over all other interests.”
Miller, F. G. and Brody, H., “Clinical Equipoise and the Incoherence of Research Ethics,” Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine 32, no. 2 (2008): 151–162.
Id., at 156; see also Brody, H. and Miller, F. G., “The Clinician-Investigator: Unavoidable but Manageable Tension,” Kennedy Institute of Ethics Journal 13, no. 4 (2003): 329–346 (“In research, the investigator cannot in good faith promise fidelity to doing what is best medically for the patient-subject.”).
Morreim, E. H., “The Clinical Investigator as Fiduciary: Discarding a Misguided Idea,” Journal of Law, Medicine & Ethics 33, no. 3 (2005): 586–598.
Coleman, C. H. et al., The Ethics and Regulation of Research with Human Subjects (Newark: LexisNexis, 2005): at 502.
Morreim, E. H., “Taking a Lesson from the Lawyers: Defining and Addressing Conflict of Interest,” American Journal of Bioethics 11, no. 1 (2011): 33–34.
Davis, M. and Johnston, J., “Conflicts of Interest in Four Professions: A Comparative Analysis,” in Lo, B. and Field, M. J., eds., Conflict of Interest in Medical Research, Education, and Practice (2009): at App. C, at 302, 304–305. According to Davis and Johnston, while the concept of divided loyalties is not a new one, the term “conflict of interest” appears to be relatively recent.
Id., at 305.
In a widely influential article, Emanuel et al. have identified seven commonly accepted principles that make clinical research ethical: (1) informed consent; (2) a favorable risk/benefit ratio; (3) fair subject selection; (4) scientific or social value; (5) scientific validity; (6) independent review; and (7) respect for enrolled research participants. Emanuel, E. J., Wendler, D., and Grady, C., “What Makes Clinical Research Ethical?” JAMA 283, no. 20 (2000): 2701–2711.
Jansen, L. A., “A Closer Look at the Bad Deal Trial: Beyond Clinical Equipoise,” Hastings Center Report 35, no. 5 (2005): 29–36.
Menikoff, J. and Richards, E. P., What the Doctor Didn't Say: The Hidden Truth about Medical Research (New York: Oxford University Press, 2006): at 23. Jerry Menikoff is the Director of the Office for Human Research Protections (OHRP), created in the wake of the Tuskegee Study to interpret and enforce the federal regulations governing research.
King, N. M. P., “Defining and Describing Benefit Appropriately in Clinical Trials,” Journal of Law, Medicine & Ethics 28, no. 4 (2000): 332–343, at 338 (writing about oncology trials: “at present, it appears that no Phase I trial can offer a reasonable chance of direct benefit – and that more information about the burdens of research participation and the alternatives of palliative and supportive care is imperative.”).
Appelbaum, P. S. and Lidz, C. W., “Re-Evaluating the Therapeutic Misconception: Response to Miller and Joffe,” Kennedy Institute of Ethics Journal 16, no. 4 (2006): 367–373.
Hochhauser, M., “‘Therapeutic Misconception’ and ‘Recruiting Doublespeak’ in the Informed Consent Process,” IRB: Ethics and Human Research 24, no. 1 (2002): 11–12.
Churchill, L. R., “‘Damaged Humanity’: The Call for a Patient-Centered Ethic in the Managed Care Era,” Theoretical Medicine 18, no. 1-2 (1997): 113–126 (noting skepticism that an obligation of undivided physician loyalty can be said to derive from the Hippocratic tradition).
For the proposition that doctors have traditionally been understood as having an obligation to act in the best interest of their patients, see Wendler, D., “Are Physicians Always to Act in the Patient's Best Interests?” Journal of Medical Ethics 36, no. 2 (2010): 66–70 (citing numerous sources for this view, including the American Medical Association, the International Code of Medical Ethics, the American College of Physicians, as well as ethics scholars). Identifying a number of well-recognized exceptions to this principle, Wendler argues instead for understanding doctors as having a pro tanto rather than strict obligation to put present patients' interests first, which would allow for exceptions based on compelling justifications.
See Churchill, supra note 25, at 122.
Emanuel, E. J. and Emanuel, L. L., “Four Models of the Physician-Patient Relationship,” JAMA 267, no. 6 (1992): 2221–2226.
Id. Emanuel and Emanuel further posit that physicians should also, in dialogue with patients about the best course of action to take, promote health-related values and attempt to persuade the patient to take health-promoting actions.
See generally Advisory Committee on Human Radiation Experiments, The Final Report of the Advisory Committee on Human Radiation Experiments, 79–85 (1996) [hereinafter ACHRE]; Starr, P., The Social Transformation of American Medicine (Basic Books, 1984): at 352–363; Rothman, D., Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decisionmaking (New Brunswick: Aldine Transactions, 2003): at 51–69.
Id. (ACHRE), at 80.
Jonathan Moreno writes: “[T]he world of clinical studies from the late 1940s up through the mid-1960s was one in which a weak form of protectionism prevailed, one defined by the placement of responsibility upon the individual researcher. Written informed consent (through forms generally labeled ‘permits,’ ‘releases,’ or ‘waivers’), though apparently well established in surgery and radiology, was not a common practice in clinical research….” Moreno, J. D., National Bioethics Advisory Commission, Protectionism in Research Involving Human Subjects, Ethical and Policy Issues in Research Involving Human Participants, I-6 (August 2001), at 2.
Beecher, H. K., “Ethics and Clinical Research,” New England Journal of Medicine 274, no. 24 (1966): 1354–1360, at 1354–1355.
Id., at 1355.
Id., at 1360.
The general federal regulations governing research, known as the “Common Rule,” do not actually require ethical training for individual researchers, but OHRP strongly recommends such training as part of an institutional Federalwide Assurance. U.S. Department of Health & Human Services (HHS), “Must Investigators Obtain Training in the Protection of Human Subjects?” available at <http://answers.hhs.gov/ohrp/questions/7224> (last reviewed December 6, 2012). In addition, NIH requires periodic training in research ethics for individuals receiving NIH funds. Office of Extramural Research, National Institutes of Health, “Frequently Asked Questions: Human Subjects Research – Requirement for Education,” available at <http://grants.nih.gov/grants/policy/hs_educ_faq.htm> (last visited December 6, 2012).
Federal regulations governing human subjects research were first proposed in 1978 by the Department of Health, Education, and Welfare (DHEW). The final rules were put into place in 1981 by DHEW's successor agency, the Department of Health and Human Services (HHS). In 1991, most federal agencies involved in supporting biomedical research adopted the HHS regulations, thus establishing a “Common Rule” that applies to all of those agencies. FDA “concurs” with the Common Rule, but has not adopted it entirely. Williams, E. D., Congressional Research Service (CRS), Federal Protection for Human Research Subjects: An Analysis of the Common Rule and Its Interactions with FDA Regulations and the HIPAA Privacy Rule 14–15, 65 (2005).
See Moreno, supra note 32, at I-12.
Some of the researchers conducting this empirical research have recently applied it to medical contexts: Loewenstein, G., Sah, S., and Cain, D. M., “The Unintended Consequences of Conflict of Interest Disclosure,” JAMA 307, no. 7 (2012): 669–670.
Chugh, D. et al., “Bounded Ethicality as a Psychological Barrier to Recognizing Conflicts of Interest,” in Moore, D. A. et al., eds., Conflicts of Interest: Challenges and Solutions in Business, Law, Medicine, and Public Policy (New York: Cambridge University Press, 2005): 74–95, at 81.
Moore, D. A. and Loewenstein, G., “Self-Interest, Automaticity, and the Psychology of Conflict of Interest,” Social Justice Research 17, no. 2 (2004): 189–202, at 189, 195; see also Thagard, P., “The Moral Psychology of Conflicts of Interest,” Journal of Applied Philosophy 24, no. 4 (2007): 367–380, at 373.
Anchoring effects are among the cognitive biases that commonly shape decision making. In most situations, people do not rationally assess probabilities but rather rely on intuitive heuristics, or shortcuts, that have proven individually or culturally effective. An anchoring effect occurs when people rely too heavily on one specific piece of information and then set relative probabilities based on that information. This effect was first described by Tversky, A. and Kahneman, D. in “Judgment under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (1974): 1124–1130.
Perkins, D. N., “Reasoning as It Is and Could Be: An Empirical Perspective,” in Topping, D. M., Crowell, D. C., and Kobayashi, V. N., eds., Thinking Across Cultures: The Third International Conference on Thinking (Hillsdale, New Jersey: Routledge, 1989): 175–194; Brenner, L., Koehler, D. J., and Tversky, A., “On the Evaluation of One-Sided Evidence,” Journal of Behavioral Decision Making 9, no. 1 (1996): 59–70, cited in Moore, D. A., Tanlu, L., and Bazerman, M. H., “Conflict of Interest and the Intrusion of Bias,” Judgment and Decision Making 5, no. 1 (2010): 37–53.
See Moore et al., supra note 41, at 43.
Id., at 42–47.
Id., at 46.
Miller, M., “Phase I Cancer Trials: A Collusion of Misunderstanding,” Hastings Center Report 30, no. 4 (2000): 34–43, at 41.
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Report and Recommendations: Institutional Review Boards (1978), at 20, 79.
Jonsen, A., The Birth of the Belmont Report, Presentation at the National Bioethics Advisory Commission meeting (July 14, 1998), available at <http://govinfo.library.unt.edu/nbac/transcripts/jul98/belmont.html> (last visited December 6, 2012); Katz, J. et al., Experimentation with Human Beings: The Authority of the Investigator, Subject, Professions and State in the Human Experimentation Process, Russell Sage Foundation (1973).
Arras, J. D., “Case Study: The Jewish Chronic Disease Hospital Case,” in Emanuel, E. et al., eds., Oxford Textbook of Research Ethics (New York: Oxford University Press, 2008): at 73.
Jones, J. H., “Case Study: The Tuskegee Syphilis Experiment,” in Emanuel, E. et al., eds., Oxford Textbook of Research Ethics (New York: Oxford University Press, 2008): at 86.
Levine, R. J., Ethics and Regulation of Clinical Research, 2nd ed. (New Haven: Yale University Press, 1988): at 63; see also Report and Recommendations: Institutional Review Boards, supra note 48, at 24–25.
See Report and Recommendations: Institutional Review Boards, supra note 48, at 24–25.
45 C.F.R. § 46.111 (2009).
Faden, R. R., Beauchamp, T. L., and King, N. M. P., A History and Theory of Informed Consent (New York: Oxford University Press, 1986): at 59. See Ben-Shahar, O. and Schneider, C. E., “The Failure of Mandated Disclosure,” University of Pennsylvania Law Review 159, no. 3 (2011): 647–749, at 726 (critiquing mandated disclosures generally, but noting specifically that they cannot replace expert judgment: “Reaching a good decision is not an exercise in analytic logic or an application of a simple theory. Rather, a good decision results from technical experience and savvy, gained by practice, trial-and-error, and educated intuition – factors that cannot be passed along by simple communication, reading a treatise, or signing a disclosure.”).
Even Haavi Morreim, who clearly understands many of the problems created by the physician-researcher's two roles, tends to see the solution as better protection through better understanding. Morreim, E. H., “The Clinical Investigator as Fiduciary: Discarding a Misguided Idea,” Journal of Law, Medicine & Ethics 33, no. 3 (2005): 586–598, at 594.
Appelbaum, P. S., Roth, L. H., and Lidz, C., “The Therapeutic Misconception: Informed Consent in Psychiatric Research,” International Journal of Law & Psychiatry 5, nos. 3–4 (1982): 319–329, at 321; Lidz, C. W., Appelbaum, P. S., Grisso, T., and Renaud, M., “Therapeutic Misconception and the Appreciation of Risks in Clinical Trials,” Social Science & Medicine 58, no. 9 (2004): 1689–1697, at 1691.
See Lidz et al., supra note 57.
Cox, K. and Avis, M., “Psychosocial Aspects of Participation in Early Anticancer Drug Trials: Report of a Pilot Study,” Cancer Nursing 19, no. 3 (1996): 177–186, at 181.
For a summary of some of the recent empirical evidence of the therapeutic misconception, see Dresser, R., “The Ubiquity and Utility of the Therapeutic Misconception,” Social Philosophy & Policy 19, no. 2 (2002): 271–294, at 274–275.
Henderson, G. E., Easter, M. M., Zimmer, C., King, N. M. P., Davis, A. M., Rothschild, B. B., Churchill, L. R., Wilfond, B. S., and Nelson, D. K., “Therapeutic Misconception in Early Phase Gene Transfer Trials,” Social Science & Medicine 62, no. 1 (2006): 239–253.
Id., at 251.
See King, supra note 22.
Cain, D. M., Loewenstein, G., and Moore, D. A., “The Dirt on Coming Clean: Perverse Effects of Disclosing Conflicts of Interest,” Journal of Legal Studies 34, no. 1 (2005): 1–25.
Id., at 17.
Id., at 18–20.
Id., at 20–21.
See ACHRE, supra note 30, at 469–475.
Id., at 475.
Institutional Review Boards (IRBs) are local committees that bear the primary responsibility for direct oversight of biomedical research with humans. They are usually, but not necessarily, associated with the institution conducting the research. 45 C.F.R. §§ 46.107–46.109, 46.111 (1991).
There is now a movement to provide single IRB review for multi-center studies.
45 C.F.R. § 46.111.
See President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Implementing Human Research Regulations 105–114 (1983); Emanuel, E. J. et al., “Oversight of Human Participants Research: Identifying Problems to Evaluate Reform Proposals,” Annals of Internal Medicine 141, no. 4 (2004): 282–291.
45 C.F.R. § 46.111(a)(2) (1991) (emphasis added).
45 C.F.R. § 46.111(a)(1) (1991). None of this, of course, is news to anyone familiar with the conduct or regulation of clinical research. It may, however, surprise much of the lay public, especially as influenced by media outlets (including health care advertising) and even patient advocacy groups that portray enrollment in clinical trials as a means to access innovative and curative therapies.
See Menikoff and Richards, supra note 21, at 62. While not explicitly required by the Common Rule (45 C.F.R. Part 46) or any other regulations governing research, “clinical equipoise” is a commonly accepted ethical requirement in almost all clinical research (excepting, perhaps, “low-risk” studies).
Freedman, B., “Equipoise and the Ethics of Clinical Research,” New England Journal of Medicine 317, no. 3 (1987): 141–145; Emanuel et al., supra note 18, at 2704.
See Miller and Brody, supra note 9, at 155.
See Menikoff and Richards, supra note 21, at 63–64.
Id., at 62–63. Deborah Hellman further explains that the concept of equipoise cannot justify the randomized clinical trial because the concept is directed at the question “What should I (the physician) believe?” rather than “What should I (the patient) do?” Hellman, D., “Evidence, Belief, and Action: The Failure of Equipoise to Resolve the Ethical Tension in the Randomized Clinical Trial,” Journal of Law, Medicine & Ethics 30, no. 3 (2002): 375–380.
See King, supra note 22, at 337.
See Menikoff and Richards, supra note 21, at 64–65 (describing the Clinical Outcomes of Surgical Therapy [COST] study).
Id. (citing Weeks, J. C. et al., “Short-term Quality-of-Life Outcomes Following Laparoscopic-Assisted Colectomy vs. Open Colectomy for Colon Cancer,” JAMA 287, no. 3 (2002): 321–328, at 327; American Society of Colon and Rectal Surgeons, “Approved Statement on Laparoscopic Colectomy,” Diseases of the Colon and Rectum 37, no. 6 (1994): at unnumbered page following page 637).
See Menikoff and Richards, supra note 21. Moreover, when surveyed, colorectal surgeons overwhelmingly said they would not agree to the small-incision technique were they the patients faced with the need for surgery. Id. (citing Wexner, S. D. et al., “Laparoscopic Colorectal Surgery – Are We Being Honest with Our Patients?” Diseases of the Colon and Rectum 38, no. 7 [1995]: at 723 [reporting that only 6% of surgeons surveyed said they would agree to the new technique]).
See Menikoff and Richards, supra note 21.
Elliott, C., “The Deadly Corruption of Clinical Trials,” Mother Jones, July 20, 2011, available at <http://motherjones.com/environment/2010/09/dan-markingson-drug-trial-astrazeneca> (last visited December 5, 2012).
And they are ill-suited to do so, see Bankert, E. and Amdur, R., “The IRB Is Not a Data and Safety Monitoring Board,” IRB: Ethics and Human Research 22, no. 6 (2000): 9–11.
These were originally (and often still are) called data and safety monitoring boards. We have adopted the broader term data monitoring committees here because that is the term used in current regulation.
45 C.F.R. § 46.111(a)(6) and NIH Policy for Data and Safety Monitoring, June 10, 1998, which reaffirms the 1979 NIH Guide, vol. 8, no. 8, June 5, 1979; FDA's formal regulations only require such monitoring in emergency studies, but guidance recommends a systematic approach in many research settings. FDA, Guidance for Clinical Trial Sponsors: Establishment and Operation of Clinical Trial Data Monitoring Committees (2006), available at <http://www.fda.gov/downloads/Regulatoryinformation/Guidances/ucm127073.pdf> (last visited June 8, 2012) [hereinafter cited as FDA Guidance].
For a history of such committees, see the report of a 1992 NIH workshop: Ellenberg, S., Geller, N., Simon, R., and Yusuf, S., eds., Practical Issues in Data Monitoring of Clinical Trials (workshop proceedings), Statistics in Medicine 12, nos. 5–6 (1993): 415–616.
See FDA Guidance (2006), supra note 95, at 2.
Id., at 3.
See FDA Guidance (2006), supra note 95, at 3–4, specifying more exactly when a DMC is recommended.
FDA Draft Guidance, Guidance for Industry, Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring (2011), available at <http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM269919.pdf> (last visited December 5, 2012).
See FDA Guidance (2006), supra note 95, at 12.
Wilson, R. F., “Estate of Gelsinger v. Trustees of University of Pennsylvania,” in Johnson, S. H. et al., eds., Health Law and Bioethics: Cases in Context (New York: Aspen, 2009): at 229–254. Jesse Gelsinger was 18 years old in 1999 when he died in a highly experimental gene transfer study at the University of Pennsylvania. While born with a rare genetic liver disease from which affected infants commonly die, Gelsinger's milder form of the disease had been responsive to management with medical care and diet. When he enrolled in the gene transfer therapy trial along with 17 others, he did so as a “healthy” research subject – in other words, with no expectation of therapeutic benefit. Within four days after being injected with the experimental gene delivery vector, Gelsinger suffered multi-organ failure and died. Why his body experienced a response so dramatically different from that of other study subjects is still not clear. A number of investigations, conducted internally, by the press, and by government agencies, ensued, many of them focused on the financial incentives that may have motivated both the investigators and the University to move forward with the trial prematurely or to continue it after ignoring signs of danger. The FDA concluded that adverse events experienced by other research subjects in the trial – though nothing like the severity Gelsinger experienced – had not been reported, and that if they had been, the trial would have been halted before Gelsinger received the gene transfer. In addition, the research protocol had undergone various revisions without required FDA approval, and in any event, Gelsinger's serum ammonia did not fall within existing protocol guidelines at the time of the gene therapy injection – a fact that one of the researchers insists was clinically irrelevant.
The National Center for Research Resources at NIH was responsible for rules regarding RSA use. Those rules were withdrawn when the NIH disbanded its CRC program and replaced it with the CTSA program. The CTSA program continues the use of RSAs, but the parameters of RSA use are still being debated. See discussion at <https://www.ctsacentral.org/sites/default/files/documents/05031015AMReiderRSA.pdf> (last visited December 14, 2012).
Morreim, H., “By Any Other Name: The Many Iterations of ‘Patient Advocate’ in Clinical Research,” IRB: Ethics and Human Research 26, no. 6 (2004): 1–8.
Id., at 4–5.
Id. In other examples, Vanderbilt used an extended consultation process for parents considering prenatal surgery, and hand transplant subjects at Louisville/Jewish Hospital in Kentucky were required to choose a non-medical ‘go-between’ to assist them with their decisions.
Id., at 5; Needler, N. A. and Goldfarb, N. M., “The University of Rochester Research Subject Advocacy Program,” Journal of Clinical Research Best Practices 5, no. 5 (2009): 1–4; Easa, D., Norris, K., Hammatt, Z., et al., “The Research Subject Advocate at Minority Clinical Research Centers: An Added Resource for Protection of Human Subjects,” Ethnicity & Disease 15, no. 4 (2005): 107–110.
Society of Research Advocates, “Standard Operating Procedures: Observation of the Consent Process,” available at <www.srsa.us/?q=content/educational-resources> (last visited June 13, 2012).
See Needler and Goldfarb, supra note 110, at 1.
See Easa et al., supra note 110, at 2.
See Society of Research Advocates, supra note 111.
Tereskerz, P. M., Clinical Research and the Law (Malden, MA: Wiley-Blackwell, 2012): at 23–36.
Coleman, C. H., “Duties to Subjects in Clinical Research,” Vanderbilt Law Review 58, no. 2 (2005): 387–449, at 395–396.
See Menikoff and Richards, supra note 21, at 55.
See Miller and Brody, supra note 9; Emanuel et al., supra note 18.
See Menikoff and Richards, supra note 21, at 51–60. But see Glass and Waring, supra note 6.
See, e.g., Sage, W. M., “Some Principles Require Principals: Why Banning ‘Conflicts of Interest’ Won’t Solve Incentive Problems in Biomedical Research,” Texas Law Review 85, no. 6 (2007): 1413–1463; Morreim, E. H., “Taking a Lesson from the Lawyers: Defining and Addressing Conflict of Interest,” American Journal of Bioethics 11, no. 1 (2011): 33–34.
See Guttentag, supra note 1, at 210. Guttentag analogized the physician-experimenter and physician-friend to the separate attorneys, with equal stature, for the prosecution and defense.
Model Rule of Professional Conduct, R. 1.7(b)(3) prohibits representation when it “involve[s] the assertion of a claim by one client against another client represented by the lawyer in the same litigation or other proceeding before a tribunal.” Certain types of representation may also be categorically prohibited by state or federal law, such as the prohibition in some states “that the same lawyer may not represent more than one defendant in a capital case, even with the consent of the clients.” Id., R. 1.7, cmt. 16. While framed as ethical canons, the Model Rules of Professional Conduct create legal rather than merely aspirational obligations. See generally Bassett, D. L., “Three's a Crowd: A Proposal to Abolish Joint Representation,” Rutgers Law Journal 32, no. 2 (2011): 387–458, at 405.
Johnson, S. H., “Five Easy Pieces: Motifs of Health Law,” Health Matrix 14, no. 1 (2004): 131–140, at 131.
As Sandra Johnson has written, “[T]ell doctors that they have a ‘conflict of interest’ in relation to a proposed protocol for research with human subjects, and they believe that you have accused them of unethical behavior….[D]octors tend to assume that a conflict of interest exists only when they actually have made a ‘bad’ decision motivated by their financial interest in the sponsor of the research.” The medical profession in general tends to prefer calling conflicts between primary obligations, such as medical care and research, “conflicts of obligation.” Id. See Institute of Medicine, Report on Conflict of Interest in Medical Research, Education, and Practice (2009), at 48. In fact, rather than describing medical practice and research as posing conflicting interests, the Report combines these two aims under the “goals of medicine.” Id., at 44.
See Sage, supra note 121.
Model Rule 1.8(h). Not all states have adopted Model Rule 1.8(h) in this form, and some are more restrictive in what they permit. A few states go so far as to proscribe all agreements that prospectively limit a lawyer's liability. See, e.g., New York Rules of Professional Conduct 1.8(h); Alabama Rules of Professional Conduct 1.8(h). New Jersey allows such agreements, but only when “the client fails to act in accordance with the lawyer's advice or refuses to permit the lawyer to act in accordance with the lawyer's advice.” New Jersey Rules of Professional Conduct 1.8(h). According to the Comments to the Model Rules, this independent lawyer requirement is imposed “because [such agreements] are likely to undermine competent and diligent representation.” Thus, there is a recognition and concern that the quality of the representation may be altered (like the “adviser's” advice in the psychological experiments about estimation, discussed above), not merely that the client will be unduly influenced into signing the waiver or misunderstand its import, although those, too, are a concern.
The analogy between the lawyer seeking to limit liability and the physician performing clinical research is not perfect. For one, the legal client is waiving the right to sue for negligence of the lawyer, but the physician-researcher could still be sued for negligence by the patient-subject, however that might be defined. In fact, researchers are prohibited by law from seeking waivers of liability for negligence. 45 C.F.R. § 46.116 (1994); 21 C.F.R. § 50.20 (1995) (prohibiting exculpatory agreements concerning negligence in clinical investigations regulated by the FDA); Vodopest v. MacGregor, 128 Wash. 2d 840, 913 P.2d 779 (1996) (preinjury agreement releasing medical researcher from liability for negligent conduct violates public policy). The point, however, is that almost certainly negligence on the part of the physician-researcher will not be judged in reference to a standard of care that requires exclusive pursuit of the best medical interests of the patient-subject. Second, the lawyer who obtains a waiver of liability from a client has not actually become subject to a different standard of care, but simply cannot be sued by that client for certain deviations from the ordinary standard of care. Others, including professional bar associations, can still hold the lawyer accountable for practicing law to the usual standards of competence and diligence. The point, though, is that this client cannot hold the lawyer legally accountable to such standards and therefore is entitled to another lawyer's guidance on the liability waiver.
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Pub. No. 78-0012, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (1979), available at <http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html> (last visited December 12, 2012).
“Beneficence” is used in the Report to describe both negative obligations to do no harm and positive obligations to help others.
The Report appears to conflate the two conceptions of beneficence that physician-researchers might promote – the good for society and the good for individual subject-patients – by telling us that “avoiding harm [presumably to individual patients] requires learning what is harmful [by doing research],” and although “the Hippocratic Oath requires physicians to benefit their patients ‘according to their best judgment’” (again, a patient-centered approach), “[l]earning what will in fact benefit may require exposing persons to risk” (research subjects).
Kolata, G. and Eichenwald, K., “Stopgap Medicine: A Special Report; For the Uninsured, Drug Trials Are Health Care,” New York Times, June 22, 1999.
As we are hopeful that eventually everyone in the United States will be insured for health care services, questions relating to the application of our proposal to the person seeking free medical care through research participation may prove temporary. Unfortunately, however, even optimistic projections of current health care reform demonstrate that a segment of the population will remain uninsured. Much of that group will truly be part of a vulnerable population: they are likely to be less well educated, and many will be teetering on the edge of financial disaster. Some of them will be undocumented immigrants. This is all the more reason to give them additional support.
This list is based on the broad literature involving human subject protections. But some particulars come from the literature on DMCs, which attempts, we believe unsuccessfully, to address some of these concerns. Haavi Morreim's work on patient advocates, supra note 105, was particularly helpful.
We use the word “vulnerable” more broadly than the definition provided in the current regulations.
See Emanuel and Emanuel, supra note 28.
Chen, D. T., Miller, F. G., and Rosenstein, D. L., “Clinical Research and the Physician-Patient Relationship,” Annals of Internal Medicine 138, no. 8 (2003): 669–672 (describing physicians’ role in advising their patients who are considering volunteering for clinical research).
Chen, D. T. and Shepherd, L. L., “Advising Patients about Obtaining Genomic Profiles,” Neurology Clinical Practice 1, no. 1 (2011): 5–12.
See Chen et al., supra note 137.
See Wilson, supra note 103, at 229–230.
See, e.g., Employers’ Fire Ins. Co. v. Beals, 240 A.2d 397, 400 (R.I. 1968), overruled in part on other grounds, Peerless Ins. Co. v. Viegas, 667 A.2d 785 (R.I. 1995).
Fewer than 2% of adult cancer patients participate in clinical trials. Murthy, V. et al., “Participation in Cancer Clinical Trials,” JAMA 291, no. 8 (2004): 2720–2726.
There are reports that some oncologists do not tell their patients about available clinical trials because they would lose them as patients. Klabunde, C. N. et al., “A Population-Based Assessment of Specialty Physician Involvement in Cancer Clinical Trials,” Journal of the National Cancer Institute 103, no. 5 (2011): 384–397.
See, e.g., Comis, R. L., Miller, J. D., Aldige, C. R., Krebs, L., and Stovall, E., “Public Attitudes towards Participation in Cancer Clinical Trials,” Journal of Clinical Oncology 21, no. 5 (2003): 830–835, available at <http://www.harrisinteractive.com/news/allnewsbydate.asp?NewsID=941> (last visited December 6, 2012).
Ford, F. G., Howerton, M. W., Bolen, S., et al., Information on Recruitment of Underrepresented Populations to Cancer Clinical Trials, Evidence Report/Technology Assessment No. 122, AHRQ Publication No. 05-E019-2, Agency for Healthcare Research and Quality (June 2005), Rockville, MD; Kennedy, B. R. et al., “African Americans and their Distrust of the Health Care System: Healthcare for Diverse Populations,” Journal of Cultural Diversity 14, no. 2 (2007): 56–60.
In fact, experience with RSAs in studies involving minority populations has demonstrated that having a patient advocate can improve trust in the research and thus recruitment. See Easa et al., supra note 110, at 3.