Let’s do some role-playing. You’re a psychiatrist, attending a meeting of the American Psychiatric Association (APA).
So far so good?
Alright.
Well, you’ve also attended the 2009 and 2010 APA meetings, and, being interested in psychopharmacology (aka drugs), you’ve attended the presentations on medications.
Chances are, you spent a lot of your time getting good news. A lot of good news. It was really a love-fest. Pretty nice, right?
Ok–that’s enough for one day; you don’t have to pretend you’re a psychiatrist anymore. Just wanted you to experience some joyful feelings before too many bubbles got burst.
The news about the meds was so good, in fact, that a couple of researchers started to get a little suspicious about the whole ‘feeling-the-love’ thing going on surrounding medication presentations. And, lo and behold, when they looked at the situation more closely, it started to look mighty suspicious.
In their recent article, “Reporting Bias in Industry-Supported Medication Trials Presented at the American Psychiatric Association Meeting,” authors Sen (Department of Psychiatry, University of Michigan) and Prabhu (Department of Psychiatry, Yale University School of Medicine) examined a total of 278 medication trial abstracts (195 supported by industry, 83 non-industry supported).
Of the industry-supported studies, 97.4% reported results positive toward the medication in question, 2.6% reported mixed results, and none reported negative results. In contrast, 68.7% of the non-industry-supported studies reported results positive with regard to the medication studied, whereas 24.1% reported mixed results and 7.2% reported negative results.
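If you want to see what those percentages mean in actual trials, here’s a back-of-envelope sketch in Python (my arithmetic, not the authors’; rounding explains any off-by-one):

```python
# Convert the reported percentages into approximate trial counts.
industry_n, nonindustry_n = 195, 83

industry = {"positive": 0.974, "mixed": 0.026, "negative": 0.0}
nonindustry = {"positive": 0.687, "mixed": 0.241, "negative": 0.072}

for label, n, shares in [("industry", industry_n, industry),
                         ("non-industry", nonindustry_n, nonindustry)]:
    counts = {k: round(v * n) for k, v in shares.items()}
    print(label, counts)
# → industry {'positive': 190, 'mixed': 5, 'negative': 0}
# → non-industry {'positive': 57, 'mixed': 20, 'negative': 6}
```

In other words: of 195 industry-supported trials, not a single one reported a negative result.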
The authors are concerned by their own findings, as they should be, but are quick to point out it’s not some right-wing conspiracy theory hatched up by the APA. Rather, they suggest,
The likely selective reporting present does not imply an intent to deceive or mislead; investigators and industry may simply feel that positive results would be of most interest to meeting attendees.
That doesn’t obviate cause for concern, however. The study ends with the following note of caution:
the selective reporting of medication studies could create an impression that the newer, more expensive psychiatric medications are more effective and safer than justified by an unbiased assessment of the evidence.
Sadly, these findings were not exactly breaking news. Back in 2008 the New England Journal of Medicine published a study entitled “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy,” which found that journals, too, were involved. The paper has a cynical slant–but it backs up its attitude with numbers.
Here in the U.S. the Food and Drug Administration (FDA) has a registry of medication trials, and all drug companies are required to register with the FDA all trials they plan to use to support a particular use of their medication. The FDA gets all the dirty details–the exact methods the researchers will use to collect the data, and precisely how they’ll analyze the data. Then the FDA people actually work to confirm the analyses. This works as a safeguard, preventing the industry from reporting only favorable outcomes from their trials.
It was also a gold mine for the researchers on the “Selective Publication” paper, as they compared drug efficacy claims from published articles with drug efficacy claims according to the FDA.
And what should they find, but. . .
Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published [was] associated with the study outcome.
I must say that them’s fighting words.
Suspicions were piqued when the authors found that of the 74 FDA-registered studies they were looking at, a full 31% (23) were never published. Anywhere.
So, you ask. . .why not?
Well. . .the FDA deemed 38 of the 74 studies positive–and all but one of these studies were published. Well and good.
Of the remaining 36, 24 were found negative, and 12 questionable.
So, these studies were either not published (22 never saw the light of day), or. . .they were published in a way the authors regarded, “in our opinion, as positive” (11), and therefore conflicted with the FDA’s conclusion [see the article for more details].
In conclusion, the studies deemed positive by the FDA were
approximately 12 times as likely to be published in a way that agreed with the FDA analysis as were studies with nonpositive results according to the FDA.
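That 12-fold figure can be roughly reconstructed from the counts above; here’s a back-of-envelope sketch (my arithmetic, not the paper’s exact model):

```python
# Of 38 FDA-positive studies, all but one (37) were published in
# agreement with the FDA. Of the 36 nonpositive studies, 22 went
# unpublished and 11 were published as positive, leaving 3 published
# in agreement with the FDA's verdict.
positive_total, positive_agree = 38, 37
nonpositive_total = 36  # 24 negative + 12 questionable
nonpositive_agree = nonpositive_total - 22 - 11  # = 3

ratio = (positive_agree / positive_total) / (nonpositive_agree / nonpositive_total)
print(round(ratio, 1))  # → 11.7, i.e. roughly 12 times as likely
```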
The paper’s conclusion isn’t just surprising in what it implies about the journals’ tendencies in selectivity; it’s startling in what it claims the study indicates about antidepressants in general:
According to the published literature, the results of nearly all of the trials of antidepressants were positive. In contrast, FDA analysis of the trial data showed that roughly half of the trials had positive results. . . As a result of selective reporting, the published literature conveyed an effect size nearly one third larger than the effect size derived from the FDA data. [emphasis mine]
Lead author Erick H. Turner, now assistant professor of psychiatry at the Oregon Health and Science University and medical director of the mood disorders program at the Portland Veterans Affairs Medical Center, said in an interview with Psychiatric Times,
I spent 3 years at the FDA reviewing studies. . .I was reviewing new drug applications, and I kept seeing these negative studies. I had never seen anything like that before. I asked my boss and he just said, ‘We see that all the time.’ He shrugged and said, ‘That’s the way it goes.’
He knew about it. My coworkers knew about it. People in the industry knew about it. But it was news to me.
And publication of trials does not take place in a vacuum. Caution the researchers:
selective publication of clinical trials, and the outcomes within those trials, can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio.
***********************************************************************************************************************************************
As an end-note, Turner had apparently found his niche. In 2012 he published a study entitled “Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database.” Same basic premise; same basic techniques.
But not as satisfying for the cynic.
Truthfully, the study was so small that the authors had difficulty reaching significance.
So while they found that, of 24 FDA-registered trials, 4 were indeed unpublished–3 of those failing to show the study drug had an advantage over placebo, and one showing the drug was inferior–and that (time for a breath), of the 20 published trials, 5 in their view were not positive but showed ‘some evidence of outcome reporting bias,’ significance was achieved in only one of three areas:
[T]he association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials. . . was less than half that for the published trials, . . .a difference that was significant.
Turner’s hypothesis, as he told Psych Central, as to why the findings of the antidepressant and antipsychotic studies were so different was something that would bring tears of joy to the makers of Zyprexa or Abilify or Risperdal:
When you compare between drug classes and use FDA data, it’s clear that, overall, antipsychotics are more effective than antidepressants. But when you rely on the data in medical journals, the difference between these two drug classes is obscured.
The problem with that?
Publication bias can blur distinctions between effective and ineffective drugs.
*******************************************************************************************************************************************
So there’s always room to be cynical if given an opportunity.
Which brings us back full circle to the APA conferences.
ScienceDaily stated that Sen and Prabhu, authors of the APA piece,
noted the large industry presence and the emphasis on research involving medicines that were still “on patent” and being actively marketed to psychiatrists attending the conference.
And here all this time I thought all the news was good news in the psychiatric drug study department.
Goes to show you what I know.
REFERENCES
Publication Bias May Give MDs an Incomplete Picture of Antipsychotics
Sen S, Prabhu M. Reporting Bias in Industry-Supported Medication Trials Presented at the American Psychiatric Association Meeting. Journal of Clinical Psychopharmacology, 2012; 32 (3):435.
Sherer R. Study Faults Selective Publication of Antidepressant Trials. Psychiatric Times. 25:4.
Turner EH, Matthews AM, Linardatos E, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine 2008; 358:252-60.
Turner EH, Knoepflmacher D, Shapley L. Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database. PLoS Medicine 2012; 9(3): e1001189.
University of Michigan Health System. “Bias found in mental health drug research.” ScienceDaily, 22 May 2012. Web. 24 May 2012.