Why can't scientists replicate results?

The root of all evil...
15 November 2022

Interview with Danny Kingsley, Open Access Australasia




What is causing this to occur? According to Danny Kingsley, from the executive committee of Open Access Australasia, the structure of academia, and the pressure on scientists to "publish or perish", is responsible, as she explained to Will Tingle...

Danny - Publishing these papers is ostensibly about communicating your research: to say, "I did some research and I found something out and this is what I found out." But in reality, publishing papers is something that researchers have to do for their careers. So if you can demonstrate that you've done some research in a particular area and that people thought it was important enough to write about it elsewhere, then you are more likely to get a grant than somebody who says, "I'm interested in doing this research, but I can't demonstrate that I've ever done any research before."

Will - And does the need to publish skew the types of papers that end up being produced?

Danny - Yes, it does in a couple of ways. One is that there's a pure need for volume. In Australia, there used to be a system which just counted the number of papers that you published, and what happened in that environment was that the number of papers increased dramatically. The way that worked was, I might do some research, and instead of writing one paper I write four papers based on that research, just taking slightly different angles on the same outcomes. The other way is the need to try and publish in a journal that has a high journal impact factor. "Fancy pants" journals like Nature or Science that people may have heard of have very high impact factors, and so they're quite prestigious journals to publish in. It's very competitive to publish in those journals: the submission rate is much higher than the publication rate, so they have a very high rejection rate. Sometimes 95% of the articles submitted to those journals are rejected. That means there is an imperative for people who want to get published in those journals to have novel results: results that are surprising. And that, unfortunately, can mean there are some poor practices on the part of the people writing the work to make their results seem more novel. Sometimes it's fairly benign. It might be simply, "oh, that's a bit of an outlier. I won't mention that outlier because it actually makes the work look slightly less interesting or less novel." But there are other times when it can be more problematic, such as what is called HARKing: hypothesising after the results are known.

Danny - So instead of saying, "I am seeking to find an answer to this question," doing my research, looking at the data, and saying, "Yes, that question was validated," or "No, it proved not to be true," I do the research, look at the data, and say, "Actually, I'm going to say that my question was this other thing, because then I can demonstrate with this data that I was right with my hypothesis." And it's worth noting that retractions of papers - when somebody finds there's a problem with a paper and it gets retracted from the record - tend to happen more often in high profile journals than they do in smaller journals, possibly because there are more eyeballs on those journals, but also quite probably because of this need for novelty. So that kind of poor practice is potentially more likely with papers that are submitted to those journals in the hope that they get published.

Will - So putting all of this together then, how do these factors all mean that there is a lack of reproducibility?

Danny - So reproducibility is complex. It's very difficult to reproduce exactly the same circumstances in exactly the same environment. So it's not surprising that there are situations where you can't exactly reproduce the outcomes, particularly if you're talking about studies that involve humans or animals, because they are obviously going to differ slightly each time. That kind of lack of reproducibility is to do with things like the size of the study and those sorts of issues. But the reason we are not doing a lot of reproduction - not reproducing work to ensure that it is valid - is that it would not get rewarded, because the work has already been published. So there is no value in reproducing. There's also a risk in trying to reproduce somebody's work: if you try and reproduce it and are unable to, you've got to call it then and say, "Professor Jones' work doesn't stand up." And if you are a subordinate to Professor Jones, that could be - what shall we say - career limiting.

Will - And science is hard even with all the best intentions in the world. You could attempt to reproduce someone's study, but the sheer number of nebulous parameters involved in every experiment means that something may have been different that was out of your control.

Danny - It might be that there's a stack of magazines on the machine and that's affected something. It might be something that you don't even realise is affecting the outcome of your results, that you haven't put into your methods because you don't think it's relevant, but which turns out to be relevant.

Will - Is there a need for better communication of methodology? Because sometimes scientists try to replicate work, but they weren't given the full instructions.

Danny - Yeah, there is actually. It's quite interesting: there are a couple of journals now which are video journals, where you video the experimentation process. That gives you the ability to see the environment the experiment was done in, so it literally gives a different view of how the experiment was undertaken, and allows a different way of communicating. It does of course mean setting yourself up differently when you're doing your research, and then editing the video and sending it off for publication. There are extra steps associated with that, and those take time away from writing the papers that are going to get you the reward. So there is something of a selflessness in the people who are experimenting with this type of thing. But as we make it more normal, we're going to end up with a better result for them, and for us as a society: better use of research funds, because that often is taxpayer money, and also a better outcome for the research process.

Will - We do not wish to alarm people, but how widespread would you say that this problem is?

Danny - There are many, many papers that are not reproducible, for many of the reasons that we are talking about today. But the issue of deliberate fraud and reproducibility because somebody has done the wrong thing deliberately, is very, very minor. We need to understand that science by its nature is questioning itself. It's never finished. So any outcome, any result needs to be built on by others, then reproducing some of that work or taking that idea and building it into something else. So we are always questioning the results in science. That is a normal thing to do. But what we don't want to be doing is questioning the endeavour of science itself.
