Distorting the facts
Interview with Kevin Gross
Now, if you consider what we’ve covered in the programme this month, you might spot that we haven’t highlighted any negative results. And we’re not alone: the vast bulk of the research that gets published each week focuses on positive findings. But this, Kevin Gross tells Chris Smith, is the scientific equivalent of shooting ourselves in the foot...
Kevin - So, what we’ve done is take a look at the consequences of a certain tendency in science: the tendency of the outlets that scientists have for communicating their findings to preferentially publish positive results. This tendency exists for very logical and understandable reasons, but if it goes a little bit too far it can actually start to degrade the ability of science, as a collective enterprise, to sift true patterns from false ones. What we’re doing in our paper is looking at how that sifting is mediated by the tendency of scientific publications to emphasize positive results as opposed to negative ones.
Chris - How did you do it?
Kevin - The tool that we use is mathematical modeling. We’ve put together a mathematical abstraction of the publication process - in a sort of embarrassingly simplistic way - and looked at the dynamics of how scientists as a community evaluate the evidence for a scientific idea. A lovely metaphor that comes to us from the French philosopher Bruno Latour pictures this process as a rugby game on an epistemological pitch, where the two end zones represent science as a whole accepting an idea or rejecting it. Each successive study that gets published pushes the ball towards one end zone or the other. And that is more or less what we’ve modeled.
Chris - And when the ball goes down at one end and a try is scored, that’s when a fact is effectively planted in the psyche, is it?
Kevin - Exactly. That’s meant to represent the outcome where the scientific community as a whole sort of considers the matter to be settled.
Chris - I'm paraphrasing what you're saying, but the problem is this: if we take the best players from the New Zealand side, the South African side and, dare I say, the English team, and we load the dice with all the best players on one side, pushing the ball towards the ‘it’s a fact’ end zone, then inevitably some things are going to get turned into facts when in reality there might be scant evidence to support them.
Kevin - Yeah, that’s a nice way of putting it. The publication process - the human institution we’ve built up to communicate our results to one another - has this very understandable preference for results that seem to suggest that a pattern exists. If that preference is strong enough, it does create a tendency for the rugby ball to move towards one end zone in a way that makes the whole process a bit less efficient.
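To make the metaphor concrete, here is a minimal toy simulation of a claim’s fate as a biased random walk. It is a sketch only, not the model from the paper itself, and every number in it - the statistical power, the false-positive rate, the chance that a negative result gets published, and the size of the "end zone" - is an illustrative assumption:

    import random

    def simulate_claim(effect_is_real,
                       power=0.8,          # assumed P(positive result | real effect)
                       false_pos=0.05,     # assumed P(positive result | no effect)
                       p_pub_neg=0.3,      # assumed P(a negative result is published)
                       threshold=5,        # the "end zone": net published evidence to settle
                       max_studies=100_000):
        """Toy model: each *published* study nudges the community's net
        evidence tally (the rugby ball) towards 'fact' (+1) or 'rejected' (-1)."""
        ball = 0
        for _ in range(max_studies):
            positive = random.random() < (power if effect_is_real else false_pos)
            if positive:                        # positive results always get published
                ball += 1
            elif random.random() < p_pub_neg:   # negative results only sometimes do
                ball -= 1
            if ball >= threshold:
                return "fact"
            if ball <= -threshold:
                return "rejected"
        return "unsettled"

    # How often is a claim with NO real effect canonized as a fact,
    # as publishing negative results becomes rarer?
    random.seed(1)
    for p_neg in (0.5, 0.1, 0.02):
        outcomes = [simulate_claim(False, p_pub_neg=p_neg) for _ in range(5_000)]
        print(f"P(publish negative) = {p_neg}: "
              f"{outcomes.count('fact') / len(outcomes):.1%} of false claims become 'facts'")

With these made-up numbers, a claim with no real effect behind it is almost never canonized when half of all negative results see print, but is almost always canonized when hardly any do: once negative results are published rarely enough, the ball drifts towards the ‘fact’ end zone even though the underlying effect is not there.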
Chris - So when you run your model, which factors emerge as the really important drivers, and what does this tell us about where we’re going - I don’t want to use the word ‘wrong’ - but where we could perhaps have scope for improvement, in order to minimize this bias in the future?
Kevin - Yeah, that’s a great question, and it naturally raises the question of what remedies there are - how we might improve the processes we have. Going back to the rugby metaphor, one answer you might think of is to just make the pitch a bit longer: move the end zones back, and go a little longer before we consider a matter settled. What our model shows is that that might prolong the game, but it doesn’t really help in the end - you’re still left with an inefficient process for sorting true claims from false ones. What does help is to value the communication of results that do not detect patterns; that is to say, to value the reporting of negative results. That is partly a matter of editorial practices at journals, conference proceedings and whatnot, but there’s a larger question than that, which is that we as a scientific community have long been in the mindset of valuing positive results more than negative ones - again, for some good reasons - and I do think we might want to rethink that habit a bit.
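The ‘longer pitch’ result shows up in the same toy sketch. Reusing the (purely illustrative) simulate_claim function above with a strong bias against publishing negatives, moving the end zones back merely prolongs the game, while the share of false claims that end up canonized barely moves:

    # A longer pitch under strong publication bias: does it help?
    random.seed(1)
    for thresh in (5, 10, 20):
        outcomes = [simulate_claim(False, p_pub_neg=0.02, threshold=thresh)
                    for _ in range(2_000)]
        print(f"threshold = {thresh:2d}: "
              f"{outcomes.count('fact') / len(outcomes):.1%} of false claims become 'facts'")

In this sketch, what changes the outcome is raising p_pub_neg - publishing more negative results - not lengthening the pitch.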
Chris - Some people have suggested that one approach might be to encourage people to publish all their data, because some of these biases emerge as a consequence of cherry-picking data. Where do you stand on that?
Kevin - In a world where we had infinite mental capacity to learn about all the work that has been done, it would certainly make sense to publish every last datum. But scientists only have finite cognitive abilities, so there’s an argument out there - called ‘the cluttered office problem’ - that if you publish every last datum, you might lose the signals in a sea of noise. I think that’s a really compelling question, and there’s still more work to be done to understand where on that spectrum the optimal, or at least a good, place to land is, as far as whether we should be publishing every datum. But I certainly think we should be publishing more negative results than we currently are.