Is there a crisis in science publishing?
Interview with Malcolm MacLeod, University of Edinburgh
More than one and a half million papers describing new scientific breakthroughs are published every single year. But is this science actually trustworthy? If it is, then it ought to be possible to reproduce the results independently. But when researchers from the Virginia-based Center for Open Science set out to do this, even with papers published in so-called high-impact journals, they weren't always getting the same results. Izzie Clarke spoke to Malcolm MacLeod from the University of Edinburgh, who reviewed their work...
Malcolm - Essentially, what they were trying to do was to say: let's take some findings from social sciences research which have been published in high-impact journals like Nature and Science, and see whether those findings hold up when we test them again in exactly the same way as the original studies. They did that for 21 findings, and found that, for 13, they probably could replicate them, and for the rest: not so well. Now, there are three patterns that they found. For some studies, they were able to replicate both the direction of the effect that had been reported and also the size of that effect. For others, they showed the same direction of the effect, but the effect size was a lot smaller. And for others, there was no effect at all in the replication studies. That pattern fits with what we've seen elsewhere: some studies replicate and others don't replicate at all.
Izzie - So when we say “direction”, are we saying: “we've done what they said they did in the paper, but actually we're only getting something that looks a bit like that result, and it's not as strong as they reported”?
Malcolm - That's exactly right. On average, the effect sizes in the replication studies are about half the size they were in the original studies. And that's potentially important if you are thinking about, for instance, a drug that was developed for stroke and tested in the laboratory in cell culture and animal models. You look at all of those data and you say, “Crikey, this drug looks highly effective; it improves outcome by 40 or 50 percent. So we should certainly do a human clinical trial, and we won't need very many humans in that trial to show that it works.” But if the true effect size is substantially smaller, then you've designed your clinical trial wrong, and actually the whole rationale for going into a clinical trial might be incomplete.

These studies were sampled from those published in high-impact journals, and previously one of the comments about the replication effort had been, “Well, of course, if you look at everything across the range of all journals, then you're going to get some things which work and some things that don't; but if you looked at things published in high-impact journals, then you wouldn't have that problem.” And it turns out that in fact you do have that problem.

I think there's a wider context here, because often these efforts at replication are seen as a criticism of a particular community of researchers in a particular field. But the fact is that in whichever field we have looked for evidence of difficulties with replication, we have found them; and the more that goes on, with the contribution of this recent study, the more it becomes highly likely that these problems would be prevalent in any field of research you chose to study.
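To see why a halved effect size matters so much for trial design, here is a minimal back-of-the-envelope sketch (our illustration, not a calculation from the interview), using the standard normal-approximation formula for the sample size of a two-arm comparison; the effect sizes 0.5 and 0.25 are assumed purely to echo the “about half” shrinkage Malcolm describes:

    # Sketch (illustrative assumption, not from the interview): how an inflated
    # effect size leads to an underpowered trial. Standard normal approximation
    # for a two-arm comparison: n per arm = 2 * (z_alpha/2 + z_beta)^2 / d^2,
    # where d is the standardised effect size (Cohen's d).
    from scipy.stats import norm

    def n_per_arm(d, alpha=0.05, power=0.80):
        """Approximate participants needed per arm to detect effect size d."""
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
        z_beta = norm.ppf(power)           # desired statistical power
        return 2 * (z_alpha + z_beta) ** 2 / d ** 2

    print(round(n_per_arm(0.50)))  # reported effect size: about 63 per arm
    print(round(n_per_arm(0.25)))  # true effect half as big: about 251 per arm

Halving the effect size quadruples the number of participants needed, which is why a trial sized on an inflated laboratory estimate can end up far too small to detect the real effect.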
Izzie - That's exactly what I was going to ask: are other sciences at risk, and how can you reduce that risk?
Malcolm - Other sciences are at risk. The first thing we can do is improve the reporting, the design, and the conduct of those studies. The second thing we can do is try to understand why replication might not occur, because in these replication studies they've done everything they can to nail it down: they're doing exactly the same thing, and every variable which they consider to be important to the outcome is controlled and is the same in the two studies. So what that implies is that there is some variable, which we don't know is important, that is driving differences in the observed outcome; if we understood it, it might tell us a bit more about the phenomenon being tested.
Izzie - I see. How can we trust what is published?
Malcolm - So, if you think about research as a product, then you need the user of the research to be able to do due diligence on whether that research does what it says on the tin. That involves skills in critical appraisal, which we're now teaching in our universities and in our institutions. It also means that if a journal can really go down the line of doing critical appraisal on the work it publishes, then that will increase the quality, the veracity, and the trustworthiness of its output; and journals are doing this now. Time will tell whether that has an effect. The bottom line is that, in the same way you wouldn't buy a car without taking it for a test drive, you shouldn't take a research finding from a paper and believe it to be true. Just because it's newsworthy doesn't mean it's true.