Should we trust old climate measurements?

Data from centuries ago are important for modelling our changing climate, but should we have faith in them?
28 September 2021

Chris has been in touch to ask: "We include measurements taken from centuries ago in our climate models, but are those measurements that our predecessors made actually reliable?"


Ella Gilbert from the University of Reading gives us the lowdown on old climate data...

Ella - That's a really good question. Generally, with climate modelling, we try to integrate as much real-world data as we possibly can to get the best picture of what conditions are like when you press go on the model. And people have been collecting measurements of temperature for a really long time - it's one of the first things that we could measure. But technology has improved quite a lot, thankfully, since the 1800s and 1900s, when measurements were taken using relatively crude techniques - for example, glass mercury thermometers. One of my favourite stats is that, before the 1940s, to measure sea surface temperature people on ships would throw a bucket overboard, drag up a bucket of water and stick a thermometer in it. So you can imagine that in the process of getting from the sea up over the side of the ship, the water might change temperature a little bit.

So not exactly the most accurate, whereas now we have different techniques: we can use underwater buoys and robots to collect our data, and we have automatic weather stations these days, which can record temperatures and so on. So it's become a lot less old school, and the ways that we collect those temperature measurements have changed. The stations where we record these temperatures might have moved, or they might change the hour of the day when they collect the observations, and all of these things actually have an impact on the values that are recorded. You can essentially get a step change when something changes in the way that the measurement is taken. So you have to adjust for all of these things, and there are algorithms, devised by cleverer statisticians than I, that account for this. I think they call it a process of homogenisation, which accounts for the sort of step changes we see when a station moves or changes the hour at which it collects its observations.
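To make the step-change idea concrete, here is a minimal sketch in Python. It is a toy illustration on invented data, not any of the real homogenisation algorithms (which detect breakpoints statistically and are far more sophisticated): given a known breakpoint, such as the month a station moved, it estimates the spurious offset and subtracts it from the later part of the record.

```python
import numpy as np

def adjust_step_change(series, breakpoint):
    """Toy homogenisation: estimate and remove the mean offset introduced
    at a known breakpoint (say, the month a station moved), so the record
    before and after the break is comparable."""
    series = np.asarray(series, dtype=float)
    offset = series[breakpoint:].mean() - series[:breakpoint].mean()
    adjusted = series.copy()
    adjusted[breakpoint:] -= offset
    return adjusted, offset

# Synthetic monthly temperature anomalies: noise around zero, plus a
# spurious +1.5 degree jump at index 60 when the (imaginary) station moved.
rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.1, 120)
series[60:] += 1.5

adjusted, offset = adjust_step_change(series, 60)
print(f"estimated offset: {offset:.2f} degrees")
```

After the adjustment, the mean level before and after the breakpoint matches, while the month-to-month variability on either side is untouched.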

There are a few massive data centres that collate these long time series, which get fed into climate models. One of them is Berkeley Earth; there's one at the University of East Anglia's Climatic Research Unit; NASA also has one. All of the big centres that collect these temperature records have different methods of doing it, but it's essentially the same thing. If we're thinking about something like a change in the way we collect data, the principle is that climatic changes are broadly similar on a regional scale, so if you see a really dramatic jump at one station, you can assume that it's to do with the instrumentation changing rather than a very sudden change in the climate.
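That regional-comparison principle can also be sketched in a few lines. Again this is a toy version on made-up data, not any centre's actual method: a jump that survives subtracting the regional mean from one station's record is attributed to the instrument, because a real climate shift would show up at the neighbouring stations too.

```python
import numpy as np

def flag_instrument_change(station, neighbours, threshold=1.0):
    """Toy neighbour comparison: subtract the regional mean from one
    station's record and look for a large jump in the *difference* series.
    A real climate shift should appear at the neighbours as well, so a
    jump that survives the subtraction points to an instrumentation
    change. Returns the index just before the jump, or None."""
    regional = np.mean(neighbours, axis=0)
    diff = np.asarray(station, dtype=float) - regional
    jumps = np.abs(np.diff(diff))
    return int(np.argmax(jumps)) if jumps.max() > threshold else None

rng = np.random.default_rng(1)
# Five stations share the same regional climate signal...
signal = rng.normal(0.0, 0.2, 100)
neighbours = [signal + rng.normal(0.0, 0.1, 100) for _ in range(4)]
station = signal + rng.normal(0.0, 0.1, 100)
# ...but ours picks up a spurious +2 degree step at index 40.
station[40:] += 2.0

print(flag_instrument_change(station, neighbours))
```

Because all five records carry the same regional signal, subtracting the neighbours' mean cancels the shared climate and leaves the station's spurious step standing out clearly.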

Chris - So, to Chris's point: our predecessors - scientists of yesteryear - may well have recorded things a bit less accurately than we would today, and possibly less precisely as well. There may well be a bit of variation, but the trend is your friend: if they did it consistently, and they were a bit off consistently, it's the signal changing that matters, not so much the absolute number?

Ella - Exactly. And there's always going to be more uncertainty in yesteryear, because, first of all, there were fewer measurements being made, and we know that the techniques were slightly less accurate.
