The revelation of these and other scandals led to the National Research Act of 1974, which required institutional review boards to approve and monitor all federally funded research. The Department of Health and Human Services followed up by creating what is now called the Office for Human Research Protections, whose job was supposed to be to oversee the IRBs. But the nature of medical research has changed dramatically in the past few decades. "Back then, research tended to be a single investigator working at an academic institution conducting a small-scale clinical trial," says Dr. Jeremy Sugarman, director of the Center for the Study of Medical Ethics and Humanities at Duke University School of Medicine. "As the medicine changed, however, the review system did not."
Until last year, in fact, when the agency's budget tripled, OHRP had just two full-time investigators to monitor more than 4,000 federally funded research institutions. Since 1980, the agency has audited, on average, just four sites a year. The FDA is somewhat more vigilant, making site visits to about 200 of the approximately 1,900 IRBs that oversee research on FDA-regulated products.
Meanwhile, IRBs, which are supposed to be the first line of defense against unethical or badly designed studies, are often overwhelmed by the job. At some large research universities, a single IRB must supervise more than 1,000 clinical trials at once. Indeed, a 1996 report by the General Accounting Office found that some IRBs spend only one to two minutes of review per study. Board members can't possibly be experts in every field; most are in-house researchers whose own studies are likely to come up for review someday. Says George Annas, a critic of current U.S. laws: "Researchers tend to approve research; they know this is how the institution makes its money. They rarely deny anything."
The financial conflicts of interest extend not only to the institutions but also to the researchers themselves. One of the reasons Jesse Gelsinger's death in the University of Pennsylvania's gene-therapy trial in 1999 seemed especially scandalous was that James Wilson, the principal investigator in the study, held a 30% equity stake in Genovo, which owned the rights to license the drug Wilson was studying; the university owned 3.2% of the company. When Targeted Genetics Corp. acquired Genovo, Wilson reportedly earned $13.5 million and Penn $1.4 million.
This doesn't mean that scientists are pushing bad drugs just to make money. Their interest is in research, and they often need the financial backing of corporate patrons just to get started. After Wilson's financial interests in the Gelsinger case came to light, he insisted that they played no role whatsoever in his decisions, that research was his driving motivation. Yet Marcia Angell, former editor of the New England Journal of Medicine, argues that such a link tends to bias the investigator, even if the bias is unconscious. A recent study by the University of Toronto analyzed 70 studies of a controversial heart drug. The results were telling: 96% of the researchers who were supportive of the drug had ties to companies that manufactured it, and only 37% of those critical of the drug had such ties. As more and more scientists either own stock in or get funding from for-profit companies, the ones who have no industry connections are increasingly rare.