Existential risk and maverick science

Can scientists prevent or predict the end of the world?
24 August 2017

Interview with Seán Ó hÉigeartaigh, Huw Price, Hugh Hunt, Adrian Currie, Cailin O'Connor and Heather Douglas.


Illustration of asteroids travelling towards Earth


When the end is nigh, will scientists be able to save the day? Connie Orbach explores whether science can predict and prevent the end of the Earth, starting with a rather scary image of the future...

Connie - New York is underwater. AI robots have enslaved humanity and genetically engineered mosquitoes have led to an airborne super-malaria. No… this isn’t the start of Hollywood’s next apocalyptic blockbuster, but a just-about-possible vision of the future. And in this part of the show we’re going to be delving into the world of extraordinary catastrophic risks, asking: if the worst happens, does science have our back? I’m Connie Orbach, and this is the Naked Scientists…

To start us off on our journey let’s hear from the experts…

Sean - My name is Seán Ó hÉigeartaigh. I’m the Executive Director of the Centre for the Study of Existential Risk in Cambridge.

Connie - Yeah that’s right. Because these are all examples of existential risk. The type of thing that, if it did happen, might knock humanity out completely.

Sean - Risks that might threaten human extinction or the collapse of our global civilisation.

Connie - Sounds pretty scary, right - the threat of human extinction? But don’t worry…

Sean - Most of the risks we look at are high impact, but quite low probability. Now, they aren't all, and climate change is an example of one that is quite high probability I think. It’s likely that we will see more global pandemic outbreaks that will cause deaths of millions of people, like the Spanish flu did last century. Then there are things like, for example, it is entirely scientifically plausible that we would be hit by a meteor the likes of which wiped out the dinosaurs. But the last one of those that hit us was 66 million years ago which, if you think about it, is 660 thousand centuries, so the likelihood that it’s going to happen in the next century is vanishingly small.

So we tend to think of ourselves as, in some ways, an insurance policy. When the stakes are so high, when the consequences are so big, we think that some people should be working on these things. It’s not necessarily what everybody in the public should be worried about. In the same way that it would make sense for you to take out insurance in case your house burns down. You shouldn't be worrying about your house burning down all the time; you should just be taking the precaution of not leaving the oven on.
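To put Sean’s numbers in context, here is a quick back-of-envelope sketch in Python of that per-century figure. It simply treats one dinosaur-killer impact per 66 million years as a naive frequency estimate; the probability it prints is an illustration of his point, not a real risk model.

```python
# Back-of-envelope illustration of Sean's arithmetic: if impacts on the scale
# of the one that wiped out the dinosaurs arrive roughly once every 66 million
# years, the chance of one landing in any given century is tiny.
# The 66-million-year figure is the one quoted in the interview.

YEARS_SINCE_LAST_IMPACT = 66_000_000   # Chicxulub-scale impact, ~66 million years ago
YEARS_PER_CENTURY = 100

centuries_between_impacts = YEARS_SINCE_LAST_IMPACT / YEARS_PER_CENTURY   # 660,000 centuries
probability_this_century = 1 / centuries_between_impacts                  # naive frequency estimate

print(f"{centuries_between_impacts:,.0f} centuries since the last such impact")
print(f"Naive chance of one this century: {probability_this_century:.2e}")
# -> about 1.5e-06, i.e. roughly 0.00015%
```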

Connie - Phew! So it’s just an insurance policy. And luckily, we have clever insurers like those at the Centre for the Study of Existential Risk, otherwise known as CSER, and they’re willing to take out cover. Then, they and others like them can feed back to science and policy and, soon enough, we’ll all be on a fully comprehensive plan - i’s dotted and t’s crossed. Goodbye existential risk…

Well, of course, it’s not that easy. Not least because that would be the end of the programme. But also, science, as it stands, just isn’t set up to deal with or predict all of these sorts of catastrophes, and really, CSER is just a small outpost fighting against a sea of traditional scientific practice. Sean’s colleague, Adrian Currie, laid out the issues for me…

Adrian - The worry with existential risk is that thinking about these low probability/high impact events involves a bunch of features which make science, as it’s currently set up, really ill-equipped to deal with them. I’ll really quickly list those for you: first, you’re dealing with unprecedented events. You don’t have any evidence in a sense; you don’t have any analogies. With big rocks hitting the Earth, we know that big rocks hit the Earth in the past - we can go and look.

Robots becoming sentient and chasing us around is something that we don’t really have analogies for, so it’s very hard to have much evidence. This means that your science is going to have to be speculative; what you say is almost always going to be wrong; what you say is almost always not going to have the kind of evidential support we expect scientific publications to have. So, in order to do this scientifically, to get the ball rolling, we need to start figuring out what the landscape is like. We need to pull back some of those standards about what good science looks like and perhaps have different standards.

So instead of saying something like: this deserves to be published because it’s got a positive result which has hit my criteria for statistical significance, we might say something like: this deserves to be published because it opens up an area that we haven’t thought about very hard yet. Or it gets something wrong, but in a really interesting and important way, or it has a negative result in it, or a set of negative results. These are different criteria for success, and I think if we’re going to start thinking systematically about these kinds of events, that requires this sort of speculative thinking.

Connie - So you see the problem. We have these big existential risks and, if they did happen, they would be really, truly catastrophic. But they're also really unlikely and unpredictable, and science, which rewards getting things right, pushes people to do things which are likely and predictable. Because of this, no-one really wants to look into existential risk. In fact, it’s a bit of a career downer. Here’s Sean again…

Sean - I guess you can divide the people who are thinking about this into two communities. One is the community, like our centre and other centres like it, who have a specific remit to sort of think the unthinkable, if you will, and we’re allowed to be a little bit exploratory and a little bit weird, and that’s fine. But then there’s a whole other community, which is people who have skin in the game, who are really involved in these emerging sciences, who will sometimes feel like they need to either raise concerns about a particular consequence of a scientific trajectory, or who will have an idea that might provide a global solution, but that might overturn an existing set of theories.

I think it’s particularly challenging for those people. One example might be virology research in the last decade or so. There’s been a lot of debate about the importance, but also the potential risks, of doing research to modify the influenza virus. For a number of years, several individuals were raising concerns about both the accidental release of modified pathogens from laboratories, and also the potential risks of publishing this research, because it might give bad people bad ideas.

I think that’s a very challenging thing to do, because your whole scientific community is doing this work, firstly, because advancing our scientific understanding is a good thing and, secondly, because it helps us to provide tools against natural pandemics. It’s a very hard thing to stand up and say to your colleagues that what you’re doing might actually pose as big a risk as the solution it provides. It puts those people in a very uncomfortable position where they may upset their peers; they may alienate the funders who might support their own work; they may even bring their own field into disrepute by causing a public panic about certain types of research.

So I think that makes the role of these people - who either have a new idea that overturns existing theories, or have a new concern that pushes against the prevailing ideas in the community around them - very difficult. I think that’s a very challenging, but a very important, position to be in.

Connie - And when people do speak up, they often get labelled as cranks. There’s the 1% of scientists who think climate change is a myth, or the proponents of traditional Chinese medicine. In fiction, there’s Back to the Future’s Doc Brown and, from history, there are the many scientists whose names have been lost to the passage of time.

But there are also those who were rejected initially and later proven to be right: Galileo and his predecessors who suggested that the Earth orbited the Sun, or Darwin and his theory of evolution. These figures populate our myths and media - sometimes they’re wrong, sometimes they’re right. But, either way, they’re nearly always dismissed.

So there are two problems here: 1) how do we make science more open to these sorts of people - let’s call them mavericks - so they can think outside the realms of normal science and help us spot and mitigate existential risk? And 2) how do we tell who’s crazy and who’s correct? Well, first of all, we don’t call them crazy…

Huw - What we’re trying to deal with is the fact that some of these are people we need to listen to, so using terminology like “crazy,” which is the terminology we use to push them aside, is what we need to try to avoid doing.

Connie - Oops! Sorry Huw - good point. That’s Huw Price, Academic Director of Cambridge’s Centre for the Study of Existential Risk; he’s been pondering a solution - working title “The Maverick Room.”

Huw - Well, what we need are scientific and technological whistleblowers. The kind of people who are thinking in a slightly abnormal way; they see something that other people don’t see. We need to make it possible for them to put up their hand and get listened to in those sorts of circumstances.

What tends to happen, as we know with whistleblowers in other fields, is that they get attacked by their peers; they get marginalised and ostracised. With the idea of what we call a maverick room, we’re interested in the question as to whether it would be possible to create a kind of safe space where, instead of the norms and sociology of science pushing these people out, they could be listened to.

Connie - Talk me through this. How would this maverick room, this safe space, look - is it a physical building, is it an internet chat room? What are the kinds of practicalities around this; what can I imagine for this space for mavericks?

Huw - Okay. There’s no reason for it to be a physical space. What needs to happen is that, to get into it, the mavericks need to get a certain kind of recognition, so they need to pass through some sort of competitive process. Suppose that there are ten British mavericks every year, and they get this little prize as one of the ten mavericks of 2017, but that prize comes with status. Hopefully, there'd be some heavyweight scientific institutions backing this. They would then have the backing of those institutions. Somebody who wants to criticise them will be criticising the institutions.

Of course, people may want to do that - you can imagine the Daily Mail having a field day. But there’s something to push back; there’s the reputations of the institutions that are standing behind it. With that kind of protection, those people who’ve been through that selection process will not be able to be dismissed by their regular peers elsewhere in science.

Connie - Because that’s the problem here, this kind of reputation idea?

Huw - Yeah, that’s right. It’s all about reputation. What tends to happen - and this tends to happen more when there’s a group of people who are following some piece of science which is dismissed by the mainstream - is that, by following it, they damage their reputation and you get what I call a reputation trap. So anybody else who even takes them seriously, in the sense of just going over there to look at what they’re doing, risks falling into the trap themselves and being dismissed by the mainstream. So part of what we’re doing here is sort of deconstructing the reputation trap.

Connie - For Huw it’s all about reputation and giving people a safe, respected place where they can try things out protected from ridicule and risk. But this doesn’t deal with my second problem: we still have to decide who's in and who’s out of this safe space. How do we do that when everyone who goes against normal science has the potential to be a genius?

Huw - Exactly, exactly. If you just set up a committee… in effect, some funding programmes are doing this, because some funding programmes are recognising the value of encouraging innovation in science. So they say: we want blue-sky, out-of-the-box, unconventional thinking. But then they appoint committees who, of course, think in the conventional ways, and so it’s very hard for those committees to actually pick the unconventional thinkers or, at least, to pick them in such a way that they have a reasonable probability of picking winners. Somehow we have to find an incentive structure for the committees so they get rewarded when they pick somebody that an ordinary committee wouldn’t have picked, but who turns out to have something important to say.

Connie - Still a few wrinkles to iron out there then, but Huw’s on the case.

This week we’re asking if science is set up to deal with some of humanity's bigger risks. Huw Price’s idea of a maverick room seems like a nice, fairly simple way to encourage creative thinking in science. But who are these mavericks we’re trying to protect? Well, Hugh Hunt - and that’s a different Hugh, keep track - is an engineer at the University of Cambridge, and while he may not consider himself a maverick, his field is definitely a little controversial among scientists and non-scientists alike.

Hugh - We’re in a position with climate change where it looks like we’re not going to meet our CO2 emissions targets. We’re going to have to figure out a way of cooling the planet, particularly a way of re-freezing the Arctic because it’s melting much faster than we’d like to think, and geoengineering is about man-made interventions in the climate.

Connie - Can you give me an example of the sort of man-made interventions we could be thinking of?

Hugh - Geoengineering is divided up into two broad categories: SRM which means solar radiation management, and CDR which stands for carbon dioxide removal.

SRM is about reflecting the Sun’s rays back out into space. We can make clouds whiter so that they reflect sunlight out to space, or we can put stuff into the atmosphere. It’s been proposed that we emulate the effect of a volcanic eruption by putting sulphur dioxide up into the stratosphere, which volcanoes do, and that causes global cooling. We could do things like putting mirrors into space - a bit dramatic. These are all SRM, solar radiation management, techniques.

Carbon dioxide removal is about, perhaps, getting the oceans to absorb more carbon dioxide by making algae, and so on, grow rapidly, so you could seed the oceans with iron filings or something. Or, perhaps, we could capture carbon dioxide from the atmosphere and pump it deep underground - carbon sequestration, or carbon capture and storage.

But all of these geoengineering techniques are big exercises. And they're quite scary because we’d be manipulating the climate.

Connie - What sort of reception do you find you get within this field for doing these sorts of things?

Hugh - Geoengineering does seem to be a bit of a Frankenstein science, and we tend to be categorised as being in some way evil. Why would anyone want to mess up our climate? But we’re currently pumping about 35 billion tons of carbon dioxide into the atmosphere every year; that’s about 5 tons per person on this planet every year - that’s enormous. So the idea that we shouldn’t consider doing something to clean up the mess that we’ve made - to me, that’s what would be irresponsible. I think we ought to be looking at geoengineering as a responsible means of dealing with the mess we’ve made.
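Hugh’s “about 5 tons per person” is just his 35-billion-ton figure spread across the world’s population; here is a minimal sketch of that division, assuming a population of roughly 7.5 billion (the interview doesn’t quote one).

```python
# Rough check of Hugh's per-person figure: annual global CO2 emissions divided
# by world population. The emissions number is the interview's ballpark figure;
# the ~7.5 billion population (circa 2017) is an assumption added here.

GLOBAL_CO2_TONNES_PER_YEAR = 35e9   # ~35 billion tonnes of CO2 per year
WORLD_POPULATION = 7.5e9            # ~7.5 billion people

tonnes_per_person = GLOBAL_CO2_TONNES_PER_YEAR / WORLD_POPULATION
print(f"Roughly {tonnes_per_person:.1f} tonnes of CO2 per person per year")
# -> about 4.7, which rounds to Hugh's "about 5 tons per person"
```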

Connie - What is the situation with geoengineering? People have a kind of negative idea around it, but is it still happening, is there still research happening; what’s the reality of the situation?

Hugh - Geoengineering is, I think, inevitable. But, unfortunately, research in geoengineering is going very, very slowly. It’s considered to be unwise to research technologies to fix the climate because maybe that means we’re going to take our foot off the pedal in terms of trying to reduce our emissions. That’s a reasonable concern, but the problem with that is that we will probably want geoengineering in ten years’ time and we won't be ready to do it.

Connie - So what would you change so that you could be working on this?

Hugh - I think it ought to be easier to do outdoor experiments, on a small scale, on geoengineering technologies. At the moment, it’s almost impossible to do outdoor experiments without getting people concerned about the slippery slope that if you start with a small experiment in geoengineering, then you’ll end up doing geoengineering full scale. Well, I think we just have to live with that as an issue because we’ve got to start these experiments to develop our insurance policy against us not achieving our targets for carbon dioxide emissions.

What would I change? I think I would allow small-scale experiments at a scale of, say, one millionth of full scale. That would really help us move the science of geoengineering further forward.

Connie - As I said, pretty controversial. And the pros and cons of geoengineering are something that could do with a whole hour to themselves. But, in terms of how science treats its mavericks, Hugh makes for an interesting sample of one.

Especially as, on the scale of existential risks, extreme climate change is fairly high up there as a likely possibility.

So, if for the purpose of this argument we decide that, considering the possible impacts of things like extreme climate change, giant asteroids, and unregulated artificial intelligence, some fringe science is worth exploring, and that there should be an outlet for people willing to battle against mainstream thinking, is a maverick room really the right way to go about this?

I mean, firstly, it puts the responsibility for our salvation on only a handful of shoulders. Huw Price suggested ten British mavericks a year. It seems a bit like saving a sinking ship by bailing out water with a teacup. And even if our ship isn’t actually sinking, these sorts of risks are broad. Can a handful of scientists really be expected to spot every iceberg in the ocean?

And finally, who are these mavericks? The examples I’ve given and people I’ve interviewed so far are all white men. Does that in itself limit what we can do?

Cailin - Diversity matters for science, especially diverse backgrounds. Because people who have different life experiences notice different things in the world and they make different assumptions about it.

Connie - That’s the University of California Irvine's Cailin O’Connor.

Cailin - I work on things at the intersection of philosophy, and biology, and economics. The first primatologists studied male primates because they were men. And then, when women joined the field, they studied the female primates and learned all these other things about them. So they made these special discoveries motivated by their personal identities.

In the case of existential risks, there can be cases where particular populations are at special risk. So if we think about global warming, for example, people who are on low-lying islands are at special risk - existential risk that is not necessarily threatening people in other countries. If those people on those low-lying islands don’t have access to science, if they don’t have social security or job security, if they’re not able to join scientific communities, that might really affect the way we think about these kinds of risks. It might affect the sort of science we do about them.

Connie - What can we do to change that and to make it more open so that we can have more of these conversations?

Cailin - One thing that some people have pointed out as a good way to solve these kinds of problems is funding early career researchers, because young scientists tend to be more diverse than old scientists. Just as a historical path dependency, there used to be more white people and men in science; now there are more people of colour and more women. So, if you fund young researchers, they’re going to be a more diverse bunch in general.

A lot of people have pointed out that when you have the opportunity to set up some kind of discussion, if you set up a maverick room say, you have a lot of control over who you invite to the table. So, in cases like that you can just say let’s get different voices in the room, hear what different people have to say. Maybe that will tell us about perspectives that aren’t ones that would be obvious to us.

Connie - But also, if you have a maverick room, you’re dependent on knowing who those people are already and there’s an issue there, no?

Cailin - Yeah, sure there is. Science and academia are based on networks of people knowing each other, and who gets invited to conferences and groups like that depends a lot on just who you know. That’s a challenge that’s maybe hard to solve, but maybe it’s a responsibility of people to try to at least look for other voices, if they can, to bring to their discussions.

Connie - Yes. Our maverick room as a safe space would provide the protection needed for more diverse thinkers to speak out. But, with science the way it is, finding those people in the first place might present its own challenge. Is there a way to encourage creative thinking at a wider level? Remember Adrian Currie from earlier? He thinks there might be.

Adrian - I think one way of doing it is really changing the incentive structures that tend to create conservative science. One of my colleagues, Shahar Avin, has really interesting ideas about funding science through lotteries instead of through peer review. Peer review is a process where, in effect, what you do is you write your paper, or you write your funding application, and your peers - that is to say other people, other scientists who work in the same discipline as you, who are experts in the same thing that you’re an expert in - look at your work and basically say whether it’s good or not, whether it ought to be published, whether it ought to be funded.

One thing that seems great about this is it encourages this kind of inter-subjectivity - the fancy word that we would use for people agreeing on stuff. That seems important, it builds consensus, but the problem is it also serves a kind of gatekeeping function. Often, if you’re a reviewer and you get a piece of work and you think: this does not fit the usual ways that we do this, you’re going to reject it. So Shahar Avin’s thought is: if you remove that aspect and in fact have lotteries determining funding, that means, first, you don’t have to spend so much time writing these annoying grants, but second, it’s going to remove that incentive to be particularly pleasing to your reviewers. That’s an example of how you might encourage this kind of maverick thinking without requiring people to be anti-establishment, or rebels, or antisocial in some respects.

Connie - That all sounds very nice, and everyone likes a lottery, but it also sounds like you could end up with a lot of stuff being done, because there’s a reason for peer review, right? It’s a quality control, and if you’ve got a random lottery of science, that’s surely going to cause you some issues with what science is actually being done? Also for diversification of the type of science you could end up with: you roll a dice six times and every time you get a one; you could end up with everything clustered in one area.

Adrian - You’re absolutely right that a pure lottery doesn’t look very nice. What you want is the ideal combination of these things. So again, I’m just drawing on Shahar’s work here; one way you could do it would be to have a kind of lower bar, so you do have a review process and the reviewers basically throw out a bunch of them. Then there are the ones that meet some agreed standard - a relatively low bar - and there are maybe some that everyone definitely says yes to. So you can imagine the middle ones, the ones that aren't definitely no and aren’t definitely yes; you do a lottery for those ones in the middle.
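As a concrete illustration of the triage-plus-lottery scheme Adrian sketches, here is a minimal Python sketch: clear yeses are funded, clear nos are discarded, and a lottery decides the middle band. The Proposal structure, the score thresholds and the example titles are invented for illustration; Shahar Avin's actual proposal is more sophisticated than this.

```python
# Minimal sketch of a triage-plus-lottery funding process (illustrative only).
import random
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    review_score: float   # averaged reviewer score, e.g. on a 0-10 scale

def select_for_funding(proposals, budget, reject_below=4.0, accept_above=8.5, seed=None):
    """Fund the definite yeses first, then run a lottery over the middle band."""
    rng = random.Random(seed)
    definite = [p for p in proposals if p.review_score >= accept_above]
    middle = [p for p in proposals if reject_below <= p.review_score < accept_above]
    # Proposals scoring below `reject_below` never enter the lottery at all.

    funded = definite[:budget]
    slots_left = budget - len(funded)
    if slots_left > 0 and middle:
        funded += rng.sample(middle, min(slots_left, len(middle)))
    return funded

# Example: one clear yes, two middle-band proposals, one clear no.
proposals = [
    Proposal("Asteroid deflection survey", 9.1),
    Proposal("Stratospheric aerosol modelling", 6.2),
    Proposal("Machine sentience containment", 5.0),
    Proposal("Homeopathic flu vaccine", 2.1),
]
for p in select_for_funding(proposals, budget=2, seed=42):
    print(p.title)
```

The design point is the one Adrian makes: within the middle band, being especially pleasing to reviewers buys no extra probability of funding, so unconventional but competent proposals get a fair roll of the dice.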

Connie - Enabling mavericks more widely seems to deal with some of the problems of the maverick room but, practically, it sounds much harder to put into action. Either way, encouraging scientists to think more creatively must be a positive move in a world that feels ever more unpredictable. However, as a way of combatting existential risk it does seem to be, well, a little belated. It appears to me that the existential risks we’re talking about are risks related to new technologies: genetic engineering, artificial intelligence, and, well, the car. Should science be taking more responsibility for the work it puts out there in the first place? Heather Douglas from the University of Waterloo seems to think so…

Heather - Scientists, because they are also human beings, also have responsibilities to think about the ways in which their work might be used to exacerbate existential risk, even if that’s not what they intend.

Connie - Well, I think we can agree that all scientists are human beings, but we can also ask how they practically go about taking responsibility because, let’s face it, lots of science carries potential risk but it also carries great potential benefit. We don’t want this sort of work to stop now, do we?

Heather - Certainly scientists are already very much involved in efforts to stem the tide of nuclear proliferation and the risk of nuclear war at various national levels. Some scientist groups have science and diplomacy efforts that are centred around some of these things, and science and human rights efforts. I think they’re already having conversations about what the responsibilities of scientists are: that the responsibilities of scientists are not just to produce new stuff, and make new discoveries, and offload it onto an utterly unprepared public. That’s actually not such a great plan. Even if taking more time, being more reflective about the implications of one’s work, and having more conversations about the shaping of it - even with non-scientists, or scientists from other disciplines and areas of expertise - slows science down, that is probably, at this point in time, a valuable trade-off. It’s better to be a bit more reflective and a bit more careful about the directions we take in scientific research, even if it means things don’t move quite as quickly in most areas.

A lot of scientists who work at the science policy interface have noticed that governance is lagging horribly behind our technological innovation. I don’t expect governance to ever catch up. In fact, being on the edge of the new means, in some ways, that there’s always going to be something ungovernable about it, which is why I think responsibility is always going to have to be a big part of it. That you can’t just depend upon external regulatory agencies to make it work. But if everything feels like it’s accelerating and getting away from us, slowing down is probably an okay thing.

Connie - Slowing down science sounds 1) very hard to do - it kind of feels like a runaway train at times - and 2) like something that people aren’t going to want to do. Is there really an appetite for this, and a response within the community?

Heather - I think one of the problems with slowing down science is that all the incentives within scientific practice and scientific research are currently stacked towards publish faster, get your paper out the door; that’s how you get credit, that’s how you get priority. You have to make a discovery first, you have to get the paper into Nature and Science, so there are all these pressures to do it more quickly. But I think over the past ten years we’ve seen a lot of the costs of that. A lot of the concerns over scientific replication are driven by the desire for speed. A lot of concerns over scientific fraud are driven by the desire for speed.

So I think the scientific community, even internally, is already seeing the problems with this continual pressure to publish more, and more, and more, and wondering how to take the pressure off. But I think the same can be said for the broader societal context: if scientists want to continue to experience the support of the public then they have to not just produce results that are reliable and not fraudulent, which is of course true, but they also have to think about the relationship between science and society, and the implications of their work for the broader society.

Connie - Practically, are you hopeful that this can happen?

Heather - Yes, but I’m a terrible optimist. It’s a blind spot of mine. It probably makes it easier to do my work, but I can’t tell you that I think that it’s grounded in a robust data set. But, in my conversations with scientists over the past 15 years, it seems to me that scientists are increasingly aware of the complexities of interfacing with society and are willing, with assistance, to begin to take these things on.

Connie - That was University of Waterloo’s Heather Douglas with a plea for slow science. An idea that does seem to be at odds with Hugh Hunt’s battle against climate change. So how do we balance the books? Maybe it’s a matter of a more nuanced approach to scientific regulation, in general, taking into account how far down the road we are for each risk and acting accordingly.

Whatever we do about existential risk, I can’t emphasise enough how unlikely most of these scenarios are, and mass panic over minute possibilities does not seem helpful. In the words of Seán Ó hÉigeartaigh: “if we’re concerned about our house burning down, the least we can do is not leave the oven on.”
