National Cancer Prevention Month: How’d We Get in This Cancer Research ‘Fine Mess’–and How Do We Get Out?

Remember C. Glenn Begley from “‘A Fine Mess We’re In’: Majority of Cancer Research Findings Not Replicable”? He made something of a splash by asserting that the war on cancer is partly being lost to sloppy research practices.

Ring a bell? Here goes:

The failure to win “the war on cancer” has been blamed on many factors, … But recently a new culprit has emerged: too many basic scientific discoveries… are wrong.

What that means is that it’s deadly serious, literally, that we as a society find out what’s going wrong in cancer research and make moves to improve it.

People’s lives depend on it.

*********************************************************************************************

In a previous post, I covered the paper Begley wrote with co-author Ellis, which found that only 6 of the 53 ‘landmark studies’ Amgen tried to reproduce actually held up. [It was an awkward moment.] Well, they’re back in action, explaining just how we got into this ‘fine mess.’

A number of factors account for cancer study after cancer study that can’t be reproduced, and not all of them are pretty.

To start, perhaps, at the back end of research, let’s talk about where these studies are published. The best journals reign supreme. With a son who’s a professor, I’m more than familiar with the ‘publish or perish’ worldview that surrounds research life, and the premier journals play a highly significant role in it.

Begley and Ellis criticize the science journals for looking for papers that will create the biggest publicity buzz–in the research community and out–rather than papers that might lead to practical methods for doctors to help patients.

Then the ball gets rolling. The importance of being published in Cell, Nature, Science, or a handful of others leads researchers to “cherry pick their experiments to find ‘the perfect story,'” says journalist Nigel Hawkes in the BMJ.

It’s not just that publication in the premier journals is an end in itself. Oh no–there’s much more riding on it. It’s that a paper published in a top journal gets the scientist to the next step.

Ferric Fang, a microbiologist at the University of Washington, connects the pressure to publish with the intense competition for the next grant or job.

“The surest ticket to getting a grant or job is getting published in a high-profile journal,” said Fang in “Cancer Research False Claims“.

And, to the skeptical, it’s not just the grant or job itself but what they represent: pickings from the big cancer-research pot the government and drug companies provide.

For, if nothing else, there is money, big money, in cancer research. One estimate suggests that, since 1971, about $300 billion has been spent finding cures for cancer. This price tag may not have gone unnoticed by certain less savory researchers.

A rather cynical piece, Cancer Research of 10 Years Useless: Fraudulent Studies, proposes: “There is a tremendous amount to be gained by getting away with fraud in medical studies” [and I am not implying that most studies that can’t be replicated are fraud; I think most are due to carelessness and human error]. The reason, the piece argues, is simple. According to Richard Horton, editor of The Lancet:

A single paper in Lancet and you get your chair and you get your money. It’s your passport to success.

So an environment where publication leads to buzz, buzz leads to grant money, and grant money leads to the ability to keep researching creates tremendous pressure to produce.

No surprise that researchers can be tempted to leave out data that doesn’t support their conclusions, or look at results from a skewed perspective.

Chad Haney, a cancer researcher, adds another point, one that seemed particularly relevant after yesterday’s post on retractions and a skewed Excel chart. He comments that improper statistics play an important role in publication errors: scientists may simply lack the statistical training and so, completely innocently, use “an incorrect method or interpretation” without ever consulting the requisite statistician.

Looked at that way, it’s naive enough, really.
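To make Haney’s point concrete, here’s a minimal simulated sketch (my own illustration in Python with numpy and scipy, not anything drawn from the studies above) of one classic misstep: screening many candidate markers without correcting for multiple comparisons. By chance alone, roughly 5% of pure-noise comparisons will clear p < 0.05.

```python
# A toy illustration of the multiple-comparisons trap: all data here are
# simulated noise, so every "significant" marker is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_markers = 100   # hypothetical candidate markers, none of them real
n_samples = 20    # samples per group

p_values = []
for _ in range(n_markers):
    # "Tumor" and "control" groups come from the SAME distribution,
    # so any detected difference is spurious by construction.
    tumor = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    control = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    _, p = stats.ttest_ind(tumor, control)
    p_values.append(p)

uncorrected = sum(p < 0.05 for p in p_values)
# The textbook fix: a Bonferroni correction divides the significance
# threshold by the number of tests performed.
corrected = sum(p < 0.05 / n_markers for p in p_values)

print(f"Uncorrected 'significant' markers: {uncorrected} of {n_markers}")
print(f"Bonferroni-corrected markers:      {corrected} of {n_markers}")
```

Run it and the uncorrected screen “discovers” a handful of markers that, by construction, don’t exist; the corrected version finds essentially none. That’s exactly the kind of error an untrained but innocent analysis can make.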

And what about that time spent reproducing others’ studies, as the system would, in theory, require? ‘Well, just where would that get me?’ tends to be the common thought.

An article in Reuters suggests that there is simply no incentive to verify another’s discovery. That’s not ‘where it’s at.’ Breaking ground on a ‘landmark’ study gets the press, gets a publication in the important journals, gets the new grants. Reproducing someone else’s work gets you . . . wait . . . I’m still thinking . . .

Simply put–if you can quickly write up your findings and get them published, that’s the whole name of the game, with replicability not even entering the thought process. Says Ken Kaitin, director of the Tufts Center for the Study of Drug Development, “You make an observation and move on. There is no incentive to find out it was wrong.”

********************************************************************************

So . . . a lot of ways to get into a fine mess. Do I hear suggestions for getting out?

And I do, actually; it’s not an echo.

Begley and Ellis have a list of recommendations:

1. Researchers need to report–not hide–their negative findings in their papers.

The All Results Journals believe in the value of all findings, and they assert that, currently, “. . . more than 60% of the experiments fail to produce results or expected discoveries. . . . [G]enerally, all these negative experiments have not been published anywhere as they have been considered useless for our research target.”

Naive as I am, I didn’t know that researchers had the option of hiding results that didn’t support their theory, so I’m all for returning things to the way I believed they were.

2. And back to those journals, which got such a bad rap above. They should take a look, a good hard one, at their acceptance policies, both to ask whether publishing ‘the study of the moment’ really matters most, and to check whether the paper they’re about to publish has been properly analyzed for reproducibility. Think what that could save them on the retraction front.

3. Several scientists involved in cancer research point out an unpleasant truth: compared to the number of cancer researchers out there, there are very few real success stories. That makes the competition all the more fierce, and concomitantly increases the desire to massage results.

But what if reward didn’t always come from finding the supposed ‘latest breakthrough’?

Suggest Begley and Ellis:

Institutions and committees should give more credit for teaching and mentoring: relying solely on publications in top-tier journals as the benchmark for promotion or grant funding can be misleading, and does not recognize the valuable contributions of great mentors, educators and administrators.

If not everything rides on the great white whale of a new discovery, the tendency to manipulate results to get there will be much less.

And, coming as I do from a family of teachers and professors, I’m all for this last one: we don’t reward teaching and mentoring enough in this society; it’s time we did some catch-up.

****************************************************************************************

Seems like there are some pretty clear ideas about how we got into this fine mess, and some promising suggestions about how to get out.

We fail to follow them at our own peril–and the peril of every patient fighting cancer with only the tools we have in our arsenal now. Those tools–they’re simply not good enough, and we have so much invested in finding better ones. Let’s make that investment pay off.

References

Begley CG, Ellis LM. Raise standards for preclinical cancer research. Nature 2012; 483:531-533.

Begley S. In cancer science, many “discoveries” don’t hold up. NewsDaily March 28, 2012.

Furman J, Jensen K, Murray F. Governing knowledge in the scientific community: Exploring the role of retractions in biomedicine. Research Policy 2012; 41(2):276–290. [See for original of graph.]

Hawkes N. Most laboratory cancer studies cannot be replicated, study shows. BMJ 2012; 344:e2555.

Jagsi R, et al. Frequency, nature, effects, and correlates of conflicts of interest in published clinical cancer research. Cancer 2009; 115(12):2783-2791.

Kolata GB. Reevaluation of cancer data eagerly awaited. Science 1981; 214(4518):316-318.

Naik G. Scientists’ Elusive Goal: Reproducing Study Results. Wall Street Journal December 2, 2011.

Stevenson H. Cancer Research of 10 Years Useless: Fraudulent Studies. Gaia Health August 14, 2011.
