Science Buffs is covering an upcoming Keystone Symposia live streaming event called “Rigor in Science: The Challenge of Reproducibility in Biomedical Research.” A panel discussion (filmed earlier this year) will be streamed for free on November 8th, 2017 and will be followed by a live Q&A where anyone from the public can pose questions. The latest posts on our blog are about reproducibility in science, with an emphasis on the biological sciences, to help inform the discussion. Find the first post here. Read the second post here.

The System Sucks: Why Academic Publishing (As It Stands) Is Bad For Reproducible Science


In our first and second posts of this series, we talked about why scientific results (particularly results from biological sciences) might not be easy to reproduce. In this post we are tackling a different issue: what factors incentivize the publishing of crappy results?

Sometimes scientific results are hard to reproduce because they’re wrong. Take for example, this infamous scandal in the stem cell field.

In January of 2014, two papers were published that described a new way of making stem cells from normal cells. Stem cells are “blank slate” cells from a multicellular organism (such as a human or mouse) that can grow or change into almost any other type of cell. Forcing a normal cell to turn into a stem cell is a big deal because it means you can take any cell and turn it into a population of the cells you want to study or use in medicine.

By 2014, scientists had already found ways of making cells “stem-like” by turning on some important genes in adult, developed cells. But the papers from 2014 described a simpler method: turn a normal cell into a stem cell by stressing it with acid or pressure. If that sounds too good to be true, well, it was. In July of the same year, both papers were retracted (retroactively removed from the journal). Moreover, the first author on both papers, Haruko Obokata, was accused of and found guilty of research misconduct.

This is an extreme example of science that can’t be reproduced, in this case because the science was outright falsified. But there are other reasons results can be wrong: perhaps the sample size was too small and the results were actually random noise, not a significant effect; perhaps the statistics were sloppily handled. In an ideal world, these cases would be caught because scientists check, double check, and triple check their work before they publish. So why aren’t they caught?

The answer, or one answer, is because the world of scientific publishing can be brutal. Academic scientists often repeat the phrase “publish or perish.” This is a pithy way of saying: “If I don’t have this paper out by the time I’m applying for tenure/my next grant/a promotion, I’m not going to make it in academia.”


Publishing research in high-profile and well-regarded journals is absolutely critical for success as an academic scientist. More papers on a résumé translate directly into higher regard from other scientists. For junior researchers, or researchers just starting their own labs, tenure or even their jobs can be on the line. For senior researchers, it can be the difference between winning and losing their next grant, or another source of funding.

The top-tier journals (that is, the ones that most people in academia and beyond have heard of, such as Nature, Science, and Cell) have their pick of the litter when it comes to papers. These journals want papers to be flashy, exciting stories with data that’s hard to explain away. But as many scientists know, especially biologists, psychologists, and other people working with subjects as variable as humans, it can be hard to produce the exact same results three separate times. Error bars can be the bane of scientists!

It’s not a surprise that the pressure to publish in top journals, added to the journals’ own desire for exciting and irrefutable results, could lead to some slight fudging of the data. And the race to publish before competitors might mean scientists do an experiment just once instead of three times. But this leads to paper retractions, wasted money down the line, and an institution of science built on, well, not much at all.


Impact factor is a measure of how often a journal’s articles are cited. Here, retraction index (the number of retractions divided by the number of published articles) has been plotted against impact factor. As impact factor increases, the retraction index also increases. (Via Retraction Watch: Ferric Fang and Arturo Casadevall, “Retracted Science and the Retraction Index.”)
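If you want to play with that metric yourself, here is a minimal sketch in Python of how a retraction index like the one in the figure could be computed. The journal names and counts are made-up placeholders, not the actual data from Fang and Casadevall’s study.

```python
# Minimal sketch of the retraction index described in the figure caption:
# number of retractions divided by number of published articles.
# The journals and counts below are hypothetical placeholders,
# NOT the real data from Fang and Casadevall.

journals = {
    # journal name: (retractions, published articles)
    "Journal A": (12, 4_000),
    "Journal B": (3, 9_000),
    "Journal C": (1, 25_000),
}

for name, (retractions, articles) in journals.items():
    # Scaled per 1,000 articles purely to make the numbers easier to read.
    index = retractions / articles * 1_000
    print(f"{name}: retraction index = {index:.2f} per 1,000 articles")
```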

In earlier posts we talked about how the National Institutes of Health (NIH) is requiring scientists to consider the problem of reproducibility, and how journals are requiring more information about the number of replicates and the reagents used. But these requirements address the symptoms of the reproducibility crisis without addressing the underlying problems with science as an institution. Luckily, scientists and administrators at all levels have worried about this for far longer than writers at Science Buffs have been thinking about it, and lately there have been some suggestions that strike at the heart of the problem.

Richard Harris is an NPR correspondent and science journalist who wrote a book called Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. He has proposed that incentives need to be reevaluated by those who control the allocation of funds and tenure decisions.

“These are really hard problems to solve. Scientists really need to think differently about how to change the incentives,” said Harris. “This includes scientists in the lab, it includes chairmen of departments, it includes deans, it includes everybody through academia to look at where incentives are pushing people in the wrong direction.”

Harris, and others, have proposed that tenure decisions be based not on where a paper was published, but on the quality of the research. This would require closer reading of papers, and possibly greater investigation into how research is conducted in the researcher’s lab. Similarly, hiring committees should be asked to think about whether they are placing undue faith in a paper because of the journal in which it was published. They should be asked to think more critically about the experiments themselves, including the statistics used in the process.

Harris acknowledges that changing the culture of science in the United States is a huge undertaking.

“There’s no simple answer,” said Harris. “But a lot of people working together on the same problem can make a huge difference and add up to a lot.”

Science, after all, is a lot of people working together on the same problem. Maybe we can all work on the problem of reproducibility, too.

By Alison Gilchrist (@AlisonAbridged)

Posted by Science Buffs

A CU Boulder STEM Blog
