Cancer research shows poor reproducibility

What implications such discrepancies could have for researchers' reputations...
15 November 2022

Interview with 

Tim Errington, Centre for Open Science


‘Crisis’ might be a bit of an exaggeration, but it’s still important to understand the effects that reproducibility, or lack of it, can have on science-led sectors like medicine. Tim Errington, from the Centre for Open Science in Virginia, has led an initiative to explore the reproducibility of studies on cancer, the results of which he’s published in the journal eLife.

Tim - So we started this project eight years ago, and the way that we decided to go about testing it was to make sure that we could first start with the original papers: identifying all the information that we could, and trying to work with those original authors to understand exactly the way the original research was done. And then we worked with independent researchers, separate from those labs, to see if they could conduct it again. The key there was just trying to make sure that we had no reason beforehand to think we wouldn't get the exact same results. And we did that over eight years, looking at a variety of different papers that were published in cancer biology.

Chris - So how did you choose those papers? Were those ones that were judged to be really seminal in the field or the kinds of papers that really direct or drive a field in a certain direction, therefore sort of lynchpin findings that everyone else is hanging research on? Or was it just a random selection of "we'll test this, test this, test this" and get someone else to see if they could effectively follow the same recipe to the same result?

Tim - So the approach we took here was to look at impact - being careful with that word - which is really what you were just getting at. Which papers, which findings, were making and getting the most attention in the research literature when we started this? Whose work were people reading? Whose findings were they downloading and citing? When we started, these papers had only just been published, but we were hunting for the ones getting the most attention because we thought, exactly as you were saying, "Well, let's look at these ones, because they're the ones that are going to have the broadest implications and will presumably drive those fields forward. So let's see how reproducible they are."

Chris - And when you did that, what was the result? How many of those really high impact or important field driving publications did you manage, with your independent teams, to reproduce the same results from?

Tim - Looking at a variety of measures, it was definitely less than half - sub 50% is what we found. And that number in itself I think is an interesting thing to look at. What I think is more interesting are some tidbits in there about what that means. Two big aspects stood out. The first was that it was really hard to assess the transparency of those findings, right? The data wasn't always shared. The methodology details were lacking, even after talking to the authors - we couldn't always figure this out. And the materials, those reagents that were used, weren't always easily available. We couldn't get them from anywhere. So that was one part which made it a hard process to even attempt. And the second one, the one that sticks out to me more, is the effect size, right? The practical significance of those findings. Compared to those original outcomes, our replications were 85% smaller on average. A large effect size means a finding is going to have practical significance, especially in the cancer biology space, whereas the smaller effect sizes we're finding kind of suggest that maybe there's not a practical application for it.

Chris - This is not about finger pointing of course, but different scientists are trained different ways with different motivations in different parts of the world. Did you test that or did you look at just one country's science when you were doing this?

Tim - Yeah, that's a great question. We did not look at a country's science. The approach we took was just what was being published in the literature. What was getting that attention? So the original papers that we had were largely based in North America and Western Europe to be honest. But findings from labs all over, we didn't tease apart this aspect. There are other projects that are trying to do that - look at just a single country and ask, "well, how does that look if we just look at a single country's output?"

Chris - I was wondering about countries where scientists are actively incentivized: publish a paper in a top tier journal and you get a year's pay on top of your normal salary, for example. I'm aware, people have told me, that that is the case, for example, in China, where the bonuses are huge if you publish in big journals. There's therefore an incentive to make sure that your science punches way above its weight, which could lead to some people exaggerating claims, etc. What are the implications of this? If it's in the cancer field and you have got results which look 85% better than they should, let's say, does this mean then that people are potentially being misled about the validity of clinical treatments if they take what someone says they've found and it can't be reproduced?

Tim - I'll answer that two ways, yes and no. All these early findings definitely find their way out into the media, into the news, into social media, as well as blog posts. They get out beyond the science sphere, right? And we know that that can impact behaviour and policy. We as researchers might read an interesting study that tells us alcohol consumption does XYZ in terms of cancer risk, and so I might curb my behaviour; or maybe red wine is good for me, so now I curb my behaviour the other way. So I think it can first directly impact individuals themselves. It obviously also has an impact at the care level. We know that a lot of these findings, especially the ones looking at diagnostic markers, for example, can find their way into treating patients - there are doctors and clinicians who will take that evidence and move with it, rightly so, as they should. But the problem is, if it's not going to stand up, or we don't really understand how reproducible it is, it might mislead them by accident. The last thing is, if we really think about where all of this research is going, we're hoping it can find its way out to the public to actually make an impact, and this can slow that pipeline down as we try to move findings toward becoming some type of intervention or drug or treatment that can actually help improve lives.

Chris - And just on that point, if you are a company and you are buying rights and patents to exploit a technology or a finding, does this mean that potentially you and your shareholders are being misled?

Tim - Yes and no. What I'm seeing and hearing - and it's worth saying this is anecdotal - is that there's hesitancy about what we publish. In light of these findings and others, if something looks too good to be true, maybe you should just wait a little bit and get evidence from somebody else, so that you don't get tricked in the way that you just said. I think there's more hesitancy towards taking this and moving it rapidly into application.
