Although I probably couldn’t have explained its rationale, I never questioned the anti-pharma animus that pervaded my medical education. The message I received from certain outspoken classmates and fellow trainees was that interacting with pharmaceutical reps was simply wrong. Being caught with a pharma-sponsored sandwich was like being seen throwing compostable items into the garbage: people glared. Being a pharmascold conferred the do-gooder sheen many of us coveted.
I suspect my experience was not unique. Indeed, the American Medical Student Association (AMSA) now grades medical schools on their creation of a “pharma-free” environment, issuing annual report cards on conflict-of-interest policies and curricula.1 AMSA recommends prohibiting or actively discouraging faculty from giving industry-sponsored talks — but it provides schools with “toolkits, templates, talks, and training institutes” to help them spread the anti-industry word. So, in addition to vying for the best residency slots and the highest board scores for their students, medical schools now compete on how successfully they instill a recognition of industry greed.
After AMSA gave Harvard an F in 2009, some students mobilized to protect their colleagues from an industry-tainted education. The tipping point for one outraged student, according to a New York Times article, was a lecture on statin therapy by a professor who was also “a paid consultant to several drug companies.”2 The student thought the professor focused too much on the benefits of statins and belittled a classmate who asked about side effects. “‘I felt really violated,’ said the student. ‘Here we have 160 open minds trying to learn the basics in a protected space, and the information he was giving wasn’t as pure as I think it should be.’”
This application of language associated with rape and child abuse to the circumstances of education about effective drugs reveals a feature of the conflict-of-interest movement that has fed its contagion and rendered it virtually unassailable: it casts industry interactions as a moral issue. Once moral intuitions enter the picture, the need to rationally weigh trade-offs is often eclipsed by unexamined convictions about right and wrong. And as psychologist Philip Tetlock told me, “Once a moral outrage campaign gets going, it’s hard to stop. People start competing to be virtuous.”
Sacred Values and Invented Harm
Tetlock’s “taboo trade-off” concept elucidates our reactions to collisions between “sacred values” and financial considerations.3 Although scarcity of resources means that “everything must take on an implicit or explicit price,” Tetlock explains, we tend to insist that certain commitments are so sacred that “even to contemplate trade-offs with the secular values of money or convenience is anathema.” Health is one such sacred value. Hence, for instance, the outrage over the prospect of people selling their organs for transplantation. Or the disconnect between our professed commitment to high-value care and the prohibition against Medicare considering cost-effectiveness in coverage determinations. Even clear evidence supporting a practice shift that happens to reduce costs, such as less-frequent mammography screening, is dismissed as a secular intrusion and met with cries of rationing. What price this life? We’d prefer to think there isn’t one. And in conflicts of interest, the sacred–secular clash is obvious.
When we feel our sacred values are compromised, says Tetlock, we’re often less offended by “deviants” than by others who tolerate deviants’ way of thinking. This insight helps explain why conflict-of-interest policies have evolved not through careful data gathering and analysis but through intensification of regulations after each big scandal. A medical school dean probably won’t lose her job if patents aren’t produced during her tenure, but she will be taken to task if she appears too lax in regulating faculty–industry interactions. As Tetlock notes, “To observe taboo trade-off without condemning it is to become complicit in the transgression.”3
An even more problematic aspect of moral reasoning is the inclination to invent harm to justify condemnation. Psychologist Jonathan Haidt asked people to respond to “harmless-offensive” scenarios wherein a social norm is violated but nobody is harmed. In one, for instance, a woman finds an American flag while cleaning her closet, doesn’t want it anymore, so she cuts it into rags to clean the bathroom. Haidt found that people who were offended by social-norm violations worked hard to cling to a sense of wrongdoing, even when they couldn’t find evidence that anyone had been hurt — saying things like, “I know it’s wrong, but I just can’t think of a reason why.”4 Rather than rethinking their reactions on being informed that no harm was done, subjects strained to invent negative consequences. My favorite example is a child who insisted that the flag shredder was causing harm because the rags would clog the toilet and cause it to overflow. As Haidt concludes, moral reasoning is not “reasoning in search of truth,” but rather “reasoning in support of our emotional reactions.”
How might harm invention affect conflict-of-interest regulation? Susan Desmond-Hellmann, chief executive officer of the Gates Foundation, former chancellor of the University of California, San Francisco, and former president of product development at Genentech, has spent her career thinking about how to strike the right balance between innovation and regulating potential conflicts. She sees trust as the most valuable thing physicians can offer patients. “I have always found patients’ willingness to trust remarkable,” she told me. “You take all your clothes off. You tell them all your secrets, all your bad habits. It’s a pretty intimate thing.” But perhaps our search for conflict-of-interest victims has left us with a myopic view of trust. Patients trust us to put their interests above desire for financial gain, but they also trust us to work hard and quickly to find cures for their diseases. Back when Desmond-Hellmann was both treating women dying of breast cancer and working on experimental therapies, she was forced to consider the paternalistic nature of our assumptions as she tried to “protect” her patients from her ostensible conflict. Their stance seemed to be, “Who do you think you are, trying to deny me an experimental therapy? Don’t put yourself in my shoes. I’m in my shoes.”
Disgust-Driven Spin
There’s an old adage that if you haven’t done any negative appendectomies, you aren’t operating enough. Though high-sensitivity CT scans may render its application to appendectomy obsolete, the principle that an approach that increases the likelihood of benefit often also confers increased risk of harm remains pertinent. In evaluating interventions, particularly those that elicit strong emotional reactions, we tend to assume that risk and benefit move in opposite directions.5 Positive feelings toward an intervention can make us assume that a high likelihood of benefit means low risk. And when we find a risk particularly noxious, we may believe that eliminating it will inevitably increase benefit. We don’t evaluate trade-offs and then develop a feeling based on that analysis; our feelings guide our evaluations.
The diagnostic approach to appendicitis, which is relatively affect-neutral, permits clear-eyed weighing of trade-offs. Most physicians would agree that it’s worse to miss an appendicitis case than to operate on someone who doesn’t have one. For many people, however, the medical-industrial complex elicits deeply negative feelings that make it tough to evaluate fairly any intervention aiming to mitigate industry influence. Prominent stories of wrongdoing, resentment toward the very wealthy, and the moral sense that pecuniary interests violate the sacred value of health have fostered a unique brand of disgust. This disgust has focused our attention on eliminating any risk from industry influence, while we consistently fail to account for potential benefits lost.
One hallmark of moral reasoning is the abandonment of consequentialist thinking in considering punishments for acts we’ve condemned. Our intuition is that the punishment should “fit the crime.”6 But such punishments tend to reflect our outrage rather than consideration of potential consequences. Participants in one study were asked to consider punishments for companies whose product had harmed a patient7 — for instance, an influenza vaccine that had killed a child. Though the risk of dying from the vaccine was known and disclosed to parents, it was one tenth the risk of dying from influenza if children weren’t vaccinated. Study participants were told that the company had decided against making a safer vaccine because its profitability was uncertain.
Participants were then given two possible outcomes of penalizing the company — it would lead the company to make a safer vaccine, or the vaccine would be removed from the market, where no other vaccine was available — and asked whether their penalties would differ. If consequences were considered, the penalty would seem rational in the first instance but not the second. But participants’ judgment was apparently guided by a desire for retribution: nearly two thirds did not think the company should be punished less harshly in the second scenario, despite the grave consequences for public health.
Of course, conflict-of-interest policies are not punishments. But I think the desire for retribution against “bad pharma” informs our management of industry interactions in a way that obscures the possibility that we are obstructing medical advances. The challenge is compounded not just by memories of past wrongs but by the fact that some interactions do threaten professional judgment and, ultimately, public health. But though withholding data, falsely advertising, and securing physicians’ loyalty with Hawaiian vacations are egregious and should be prohibited, the resultant perception of corruption is hard to shake in considering interactions characterized primarily by a shared mission to fight disease. How can we better distinguish paid mouthpieces from honest consultants?
Transparency, however well intended, makes disentangling these impressions more difficult. There’s no way around disclosure — we can’t evaluate conflicts rationally without it, and once we’ve shined a light, there’s no turning it off. The problem arises from how these disclosures are cast in the public’s imagination. Proponents insist that transparency is key to maintaining public trust. If beliefs about physician–industry interactions were affect-neutral, that argument would make sense. But injecting transparency into a hostile climate virtually guarantees that fragments of information will be spun into insinuations of wrongdoing.
A Wall Street Journal article criticizing the FDA policy of not disclosing its physician-advisers’ financial ties is a telling example.8 The story features a cardiologist who has received research support and consulting contracts from industry because of his expertise in arterial stents. We are told that he received $100,000 in industry payments over 5 years. No mention is made of how much of it went to his employer or to research, or of the strict institutional de minimis requirements he followed. Instead, we’re told that “Another organization he works with, the Food and Drug Administration, doesn’t appear to mind.”
At issue is the Watchman, an atrial appendage closure device made by Boston Scientific, a company for which the cardiologist had previously consulted on an unrelated product. In keeping with the data, the cardiologist deemed the device less effective than warfarin in reducing the risk of stroke in patients with atrial fibrillation. Yet he voted in favor of the Watchman, believing it should be available for patients whose bleeding risk precludes warfarin use. Despite this appropriate clinical justification, however, the article’s conclusion insinuates that his true motive was financial: “Following the advisers’ vote, Boston Scientific told analysts it expected the Watchman to win FDA approval in the first half of 2015 and eventually reach $500 million in yearly sales.”
Financial conflicts aside, the article doesn’t explain that favorable advisory-panel votes don’t guarantee approval — indeed, the FDA had denied the Watchman approval twice despite affirmative panel votes (the device was finally approved this past March following the third advisory panel meeting). Moreover, a 2006 study examining 76 product-specific meetings showed that their voting outcomes would not have changed had all members with conflicts been removed.9 But such considerations are not part of the standard narrative on which the reporter was drawing: Dr. X has worked with industry. Dr. X has a favorable view of an industry product. Therefore, Dr. X’s decision reflects not clinical and research expertise but a desire for financial gain.
Such flawed syllogistic reasoning has become the norm. Tellers of such tales no longer need evidence of negative consequences in order to incite public outrage against industry and its collaborators; the associations themselves are enough to warrant condemnation, which has become an end in itself. The BMJ, for example, recently published the results of its investigation into the food-industry ties of U.K. nutrition scientists, many of whom serve on government advisory committees, such as the Scientific Advisory Committee on Nutrition (SACN), that are working to halt the obesity epidemic.10 More than a decade’s worth of industry funding of several scientists (mostly for research) is detailed, under the title, “Sugar: spinning a web of influence. Public health scientists are involved with the food companies being blamed for the obesity crisis.” Not mentioned is the fact that the SACN recently drafted dietary guidelines that recommend aiming for a diet in which “free sugars” constitute about 5% of calories — half the previous target — which might cause some readers to question whether obesity-promoting food companies had in fact bought the scientists’ allegiance.11 With such narratives firmly established in the public mind, how do we reverse this trend?
The Vicious Gotcha Cycle
Journalist Matt Bai recently described the transformation of political reporting after the 1987 scandal involving presidential candidate Gary Hart.12 Hart had been running a successful campaign until an informant told the Miami Herald he was having an affair. Though it’s hard to imagine today, politicians’ personal lives were not then considered media fodder, nor particularly relevant to their leadership capacity. Bai describes how the reporting of the scandal suddenly, and without any discussion of the ethical issues, ended Hart’s political career and forever changed political journalism. What was once an endeavor focused on the substance of political agendas became the heated pursuit of revelations about character flaws. “If post-Hart political journalism had a motto,” writes Bai, “it would be: ‘We know you’re a fraud somehow. Our job is to prove it.’”
A similar motto could apply to much reporting on physician–industry interactions. The bad behavior of the few has facilitated impugning of the many. When did you last read a story describing the essential role physician–industry collaborations played in the development of treatments for human immunodeficiency virus or hepatitis C? How about the tools that have contributed to the 40% reduction in deaths from cardiovascular disease over the past 30 years? Instead, the climate is so permeated with assumptions of fraudulence that treatments, like statins, that have revolutionized our ability to prevent and treat disease become pawns in the hunt for wrongdoing.
At best, the endless gotcha quest simply ruins some reputations unfairly. But I think it has proven more vicious, creating a cycle in which each story generates more distrust. The more widespread the distrust, the easier it is to tell a misleading story and the more damaging that story will be to the institution or physician in question. As reputational costs of exposure grow, everyone works harder at damage control, and fewer people defend themselves, because self-justifications may only intensify the criticism; those who are exposed just hope it will go away quietly. As the public observes this spiral of blame and shame, the conflict-of-interest movement has paradoxically achieved what it set out to avert: an erosion of public trust in medicine and science.
And we’ve lost more than trust. Bai’s most disturbing point is that the shift in political journalism has transformed politics itself. The gotcha quest may have “made our media a sharper guardian of the public interest against liars and hypocrites,” Bai acknowledges. But he notes what’s been lost: some people who would make excellent political leaders may forgo running for office to avoid intense scrutiny of their private lives.
I think oversimplified conflict narratives pose a similar threat to medicine, allowing true experts to be replaced — on advisory panels, as authors of reviews and commentaries, in other capacities of authority — by people whose key asset is being conflict-free. Bai can describe what happened after Hart’s demise but can only speculate about what might have happened absent the scandal. The same problem plagues our evaluation of interventions meant to regulate physician–industry interactions: we can’t know what we may lose.
Perhaps effective therapies are adopted more slowly when industry representatives are banned from our workplace. Perhaps we miss opportunities to understand complex medical topics because experts aren’t permitted to write about them. Perhaps life-saving therapies whose development requires the combined talents of clinicians and industry scientists don’t materialize. The invisibility of potential benefits makes rationally weighing the trade-offs we make with conflict-of-interest policies even harder. When we miss an appendicitis diagnosis, we usually find out that we’ve erred. When we prevent the dissemination of expertise, thwart productive collaborations, or dissuade patients from taking effective drugs, we get no such feedback. Meanwhile, we’re incessantly reminded of the so-called risks, even when they’re invented.
Recently, for the first time, I was asked to consult for a medical products company. My first thought was, “This would be fascinating.” My second was, “There’s no way.” I would have to disclose the relationship, my credibility would suffer, and I would be defenseless. That I immediately succumbed to this fear reflects our failure to manage industry relationships effectively.
I’m not suggesting abandoning regulation. When the rules work, they protect us and our patients from fraudulent marketing and twisting of facts. But when rules merely cloak an anti-industry bias in the false promise of scientific virtue, we undermine potentially productive research collaborations, dissemination of expertise, and public trust. The license to trample the credibility of physicians with industry ties has silenced debate and justified the absence of an empirical framework to guide policies. The answer is not a collective industry hug. The answer will have to be found by returning to this question: Are we here to fight one another — or to fight disease? I hope it’s the latter.
Reader Poll on Conflict of Interest
In light of the Medicine and Society series of articles by Rosenbaum and the accompanying editorial by Drazen, we invite you to put yourself in the role of editor and help us decide about the suitability of three hypothetical potential authors of review articles for the Journal. A summary of the community responses to this informal poll will appear during the summer. We thank you for participating in the discussion.
Case 1. Jane Doe, M.D., Ph.D., is a world-renowned researcher in the area of disease X, a condition affecting hundreds of thousands of people worldwide. There is a good diagnostic test for disease X, but there are no effective treatments. Doe's lab developed the diagnostic test, and her institution held a patent on key technology related to the test. Doe's institution, her laboratory, and Doe herself received annual royalty payments of $15,000 to $20,000 until 2 years ago, when the patent expired. Doe has consulted for four different pharmaceutical companies; over the past 3 years, each company has paid her over $10,000 per year to develop new therapeutic agents for the disease. Although there are promising leads, no drug has progressed beyond phase 1 safety testing in humans. The Journal is considering soliciting a review article on disease X. Since there are no available treatments, the review will focus largely on disease biology, with indications of where treatments could be used.
Is it appropriate to consider Dr. Doe as an author of this review article?
Case 2. John Smith, M.D., is a world-renowned clinical trialist in the area of disease Y, a common condition that affects about 1 of every 3000 people worldwide. Smith was instrumental in the clinical trials of three drugs that can be used to treat disease Y; all three are still patented, and there are no generic drug equivalents. The three treatments are all relatively safe, with good efficacy, low adverse event rates, and acceptable side-effect profiles. All three are also moderately expensive, with monthly retail costs of about $300, but most pharmacy benefit plans cover at least one of the three drugs, with a monthly copayment on the order of $50.
Smith continues to conduct research to define who would be most likely to benefit from treatment with these agents. His university receives about $500,000 a year from each of the companies to support his research, but according to university rules, none of that money can be used to support Smith's salary or otherwise benefit him personally. The Journal is considering soliciting a review article on disease Y. It would be a clinical review, with the author asked to indicate how he or she would treat people with the condition.
Is it appropriate to consider Dr. Smith as an author of this review article?
Case 3. Sam Green, M.D., is a well-established physician in his community and beyond, by virtue of his activities with a number of patient-advocacy groups. Although Green is a general internist, he is viewed as the local expert in disease Z, which has a point prevalence of about 2%. Through a commercial database, Alpha, Inc., learned that Green is the largest prescriber of the company's drug Q in the region. Drug Q is a new treatment for disease Z that is currently available only as a branded drug. A new research study, in which Green enrolled three patients from his practice, shows that drug Q is more effective than the current generic treatment for the disease, with a similar side-effect profile. The margin of enhanced efficacy is such that one would need to treat 15 patients with drug Q to show an advantage over treatment with the generic drug. The cost of drug Q is $100 per month, as compared with $10 per month for the generic.
As Journal editor, you receive an inquiry from Green about your interest in a review article on disease Z. He offers to cover its epidemiology, pathobiology, clinical recognition, monitoring, and treatment in 2000 words, with two tables and two figures. In reviewing the form that lists his financial associations, you note that he has received over $10,000 per year for the past 3 years from Alpha, Inc., for talks about drug Q given at various venues.
Is it appropriate to consider Dr. Green as an author of this review article?
Funding and Disclosures
Disclosure forms provided by the author are available with the full text of this article at NEJM.org.
Editor's note: This article is Part 3 in a three-part series.
Author Affiliations
Dr. Rosenbaum is a national correspondent for the Journal.
9. Lurie P, Almeida CM, Stine N, Stine AR, Wolfe SM. Financial conflict of interest disclosure and voting patterns at Food and Drug Administration Drug Advisory Committee meetings. JAMA 2006;295:1921-1928.