Solving this scientific hitch

Making sure reproducibility remains a central pillar of the scientific process.
15 November 2022

Interview with Andrew Holding, University of York




So what can we do about it? Will also spoke with biochemist Andrew Holding of the University of York, who has himself had run-ins with other people's irreproducible science that took him a whole post-doc to sort out, about the short and long term solutions that could help scientists move towards more open and reproducible methods of research and publication...

Andrew - The main win I see for science in general in tackling irreproducibility is that, as more and more biology research becomes computationally based - so these are technologies like genomics, proteomics, many words that end with omics - they use a lot of mathematical methods, and we can publish the code and the data so someone can literally download that and run it on their computer. And that is a massive win for science, because it means a person can reproduce the data analysis in an afternoon, maybe a bit longer if it's a little bit challenging to run the code. And then you can see how the thing works. To me, part of the science is the coding. That's a really quick win. If we normalise that behaviour, we can then have people building on that science. We can grow science quicker, instead of this idea that we have to keep it hidden and safe in case someone finds a mistake in it, because most people aren't producing irreproducible science on purpose.

Andrew - And I think letting people make mistakes, and letting people see the workings - how you got your answer - is a huge plus. And I don't think there's any harm in saying, "look, we've been pushing people to present their papers as if their research were 10/10 results." Let's just relax and say this is pretty convincing. And then people can honestly show the weaknesses in their work too. And that is a cultural change which, given the competition in science, is slow to happen, but it is happening. And certainly on the computational work - the things I mentioned about people sharing the code - that's changing a lot more rapidly, because that's quite a new field and there's a lot more willingness to try new things. Whereas with wet lab experiments and the established techniques, you're trying to change something that's been like that for 50 years, and that's a lot harder, but people are still coming round to the benefits of this.

Andrew - So those small wins are there. Let's put aside the absolutely fraudulent people, who are probably the absolute minority of the problem, and look at the genuine mistakes made by honest scientists who want to do the best science. How can we make it so that, if they make one of those mistakes, the next paper says, "you know what? I think this" and builds on it? That's standing on the work that came before you; that's science - not presenting a beautifully polished piece of work that meets a set of criteria that we sort of made for ourselves and that don't really exist.

Will - And if we want to shift towards a more open access attitude towards research, is there anything that we can do to prevent institutions from just piggybacking on each other's research?

Andrew - This is something you see quite a lot of people saying: "Oh, if I publish my data open access, if I publish my source code open access, someone can just go and run my code, tweak a few parameters and get a paper out of it." And I'm like, "great!" And I think that's an attitude change. So you've got to say, yes, people will piggyback on you. And what we can do is say, look, if someone is piggybacking on you - if someone sees your results and, because they have better funding in another country, they can get ahead of you - we don't see that as a bad thing. We see it as: this person did something so good, they generated a new field, they generated a new direction of science, and they shouldn't feel that they're going to be vulnerable because of that. And that's somewhere where, because of the way grant funding works and the way it is competitive, people do feel vulnerable to someone getting ahead.

Andrew - In my experience, though, usually people who take what you've done and run with it run in a different direction. It's very rare that someone has exactly the same idea as you with exactly the same data, especially if you are the person who came up with it. And the benefits of being open and sharing, and of people expanding what you're working on, outweigh the risks, so I think we should embrace this. But that small concern that maybe you'll not get the next grant because someone has scooped you, as we call it - I think we can build better protections that recognise that vulnerability and make people feel more secure. But I think the benefits still massively outweigh that risk.

Will - And so to look longer term, how can we therefore ensure that academics have the environment they need to feel safe producing their work?

Andrew - I mean, this is a really complicated one. We've got the funding environment as it is, and then we're looking forward to how we do that in the future. And, at the moment, science is funded quite often, certainly for the smaller research groups, through short term grants - very competitive, very low success rates, somewhere between 1 and 10% at quite a lot of these funding bodies. What you need to do is say, right, we can fund more science, we can support these people more. And we are not just going to go for the people with the biggest and flashiest science. We fund people for being consistent and reliable. And how we measure that - those metrics - I'm not going to give you an answer today, because I don't think we know what the metrics are yet; measuring how good science is, is such a challenge. But what I can say is, if we decide we want to change the metrics, we've got the people that can do it. You know, scientists spend their lives analysing data, and if we can't work out how to get the outcome we want from science funding, then we are asking the wrong people, to be honest, because we should be able to do it.

Will - To finish off, the last thing we want to do is undermine all the vital research that is done and is beneficial to all of us. So do you see this as a crisis or more as an opportunity?

Andrew - I think it's absolutely an opportunity. If we were to ignore it, stick our heads in the sand, it would become a crisis, because people would lose faith and people would lose trust in it. What I am seeing is that most of these issues are things that have gone wrong because of people making genuine mistakes. If they publish their real data, then people can correct that. And that's how science has always worked. We know there have been plenty of stages in the history of science with competing ideas. Sometimes they've gone backwards, sometimes they've gone forwards, but eventually we come up with a model that we build on. And so this is just another evolution of that ongoing scientific process. So I think this is a massive opportunity to say, "Look, we can do science better. We've seen the challenges", and to identify, using the skills we have as a scientific community, where best to put resources to get the best outcome for everyone who is investing in us as scientists. So that could be charities, that could be governments, and they can then see better results and a more diverse set of results - results that don't just focus on trying to get there first, to get the biggest splash in the newspapers, to get the next pot of cash.
