Artificial Intelligence: Our Last Invention
Each of us carries a world inside, jailed in the melon-sized space of our skull. The crackles of activity that form your now - as you read these words - sweep across the 100 billion neurones that are you. The feel of wet grass between your toes; the smile-creased folds of a beloved’s eyes; the smell of the sea... A wet, grey, buzzing network of cells holds your world. And right now, the human brain is the most complex structure in the known universe – but that may be about to change...
The first computer program I wrote could do 100 calculations per second. Now my programs perform 2,000,000,000,000,000 of the same calculations in the same time. That's equivalent to the difference in weight between a teaspoon of soil and Mount Everest.
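The scale of that speed-up is easy to sanity-check with a few lines of arithmetic (the two rates are the figures quoted above; everything else is simple division):

```python
# Rough arithmetic for the speed-up described above.
old_rate = 100                     # calculations per second, early program
new_rate = 2_000_000_000_000_000   # calculations per second today

speedup = new_rate // old_rate
print(f"Speed-up factor: {speedup:,}")  # 20,000,000,000,000
```

That is a factor of twenty trillion in a single working lifetime.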
Despite these amazing advances, the underlying technologies, and the way we program them, have not fundamentally changed. Most programs are still precise sets of instructions telling a computer chip, etched on silicon, what to do. This is very different to how we think - but other ways of computing come closer.
“A typical example of an unorganised machine would be as follows. The machine is made from a rather large number of similar units. Each unit has two input terminals, and has an output terminal which can be connected to the input terminals (0 or more) of other units… machines of this character can behave in a very complicated manner when the number of units is large.”
Alan Turing – ‘Intelligent Machinery’, 1948
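Turing’s ‘unorganised machine’ can be brought to life in a few lines. This is a toy sketch, not his exact construction: each unit is treated as a NAND gate (his A-type units behaved this way), wired at random to two other units, with all units updating in lockstep. The number of units and the random seed are arbitrary illustrative choices:

```python
import random

random.seed(0)  # fixed seed so the 'random' wiring is repeatable

N = 16  # number of units (arbitrary; Turing imagined 'a rather large number')

# Each unit reads from two randomly chosen units (possibly itself).
wiring = [(random.randrange(N), random.randrange(N)) for _ in range(N)]

# Random starting state: each unit is on (1) or off (0).
state = [random.randint(0, 1) for _ in range(N)]

def step(state):
    # Every unit simultaneously computes NOT (a AND b) of its two inputs.
    return [1 - (state[a] & state[b]) for a, b in wiring]

for t in range(5):
    print("".join(map(str, state)))
    state = step(state)
```

Even this tiny network of identical, brainless units produces shifting patterns of activity that are hard to predict from the wiring alone - exactly the point Turing was making.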
When neuroscientists first looked in detail at the areas of the brain that process sensation and thought, they expected to see a vast spaghetti of complex connections. But they saw something surprising: endless repetitions of the same structure, each comprising 80 to 100 neurones stacked together. These were later dubbed ‘minicolumns’, and 10 million of them form the fabric underlying our thoughts and memories.
The common factor between Turing’s ‘unorganised machines’ and the way our brains work is that complex behaviour emerges from the interaction of large numbers of simpler things. Every snowflake is a different realisation of the emergent properties of freezing water molecules. And emergence is a general process seen across all the natural sciences and at all scales. In biology we can see it most clearly in social insects: individual ants and bees are relatively stupid, seemingly just doing their thing; through their interactions with one another, though, they behave in far more complex ways. No single ant is in control, but together they build vast networks of tunnels, complete with nurseries and food stores. They can also build bridges, and care for their young. Bees create hives with air conditioning and have a communal memory of the exact locations of many different food sources, some up to half a kilometre away. Emergence connects the quantum to the universal and also underlies much of the complexity of human society.
In the same way that complex behaviour of the colony emerges from the interactions of the ants, each ant emerges from the interactions of its cells. In other words, multiple layers of individuality coexist: the cell, the ant, and the colony. You are no different, and, soberingly, all that you are is determined by the way your cells behave and interact; all they are emerges from the networks they contain.
The cells in your brain are the fabric for every thought and memory - shaped by connections to the world outside. A self-aware, self-constructing mirror of a thin slice of the external reality - this is your world and everything you perceive. It evolved over millions of years to help us survive and thrive in nature, fuelled by the food we could collect. But it is not well adapted for now. We live in a vastly different world to our ancestors and our brains have been left behind, yearning for a simpler, communal, more natural life. What if intelligence were freed from these bounds of bread and bone? What new worlds could it create?
Since the time of Turing’s insights, computer scientists have been trying to build systems that think and learn like us - computer architectures based on simple abstractions of how neurones act and communicate. Until recently, progress had been relatively slow. But the last few years have seen a dramatic acceleration, driven by a combination of new approaches to help these networks learn, increases in computing power, and ever larger datasets. This new field of ‘deep learning’ already touches all our lives, in both seen and unseen ways. Very recently, these programs have started to surpass humans on many tasks. They are better than doctors at diagnosing some diseases; they can drive cars more safely, can identify objects in pictures better than we can, and now they beat the best human players at the ancient game of Go - a game so hard for computers that even the most optimistic AI experts had predicted this feat was still at least a decade away.
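The ‘simple abstraction of a neurone’ at the heart of deep learning fits in a few lines: a unit that weighs each incoming signal, sums them, and ‘fires’ through a squashing function. The weights and inputs below are arbitrary illustrative values, not anything learned:

```python
import math

def neurone(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias...
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...squashed through a sigmoid into a 'firing rate' between 0 and 1.
    return 1 / (1 + math.exp(-activation))

# Arbitrary example: three input signals, three connection weights.
out = neurone([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.05)
print(round(out, 3))  # a value between 0 and 1
```

Deep learning stacks millions of such units in layers and adjusts the weights automatically from data - the learning is in the weights, not the code.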
Yet this is just the beginning. Our brains have created these programs, but what happens when, rather than beating us at games, these machines become better than us at designing themselves? Referred to by some as the ‘Singularity’, this would lead, quickly, to machines more intelligent than we can imagine.
Arthur C. Clarke presciently predicted in 1968: “as soon as the borders of electronic intelligence are passed, there will be a kind of chain reaction, because the machines will rapidly improve themselves... the merely intelligent machine will swiftly give way to the ultraintelligent machine...”
But this is not some far-off future fantasy. In the past few months, machine algorithms have surpassed human ability in the design of new algorithms, and their powers are developing rapidly. New substrates of computing - quantum, DNA, chemical reaction networks and others - are being developed. It is already possible to grow tiny pieces of human brain from stem cells: the neurones wire up and fire just like those in living tissue. So the first superhuman intelligence might be a brain in a bath.
This perfect storm of new technology means that we may be much closer to this transformation of everything than many realise. It is likely to arrive well before the final stages of the UK’s new HS2 train project. If we are planning our transportation networks many decades in advance, should we not also be planning for the arrival of a superior intelligence that will create a new world beyond our imagination?
Well-designed and robust AIs, set with friendly goals, could transform our world for the better. With proper management, we could support twice the current global population while reversing the destruction of nature. We could all lead lives of luxury and leisure with robot servants, free autonomous transport and time to spend with each other and our communities. Our economic productivity would skyrocket. Most current worries would disappear through better management of world resources. We could solve world hunger, cure all diseases, save the natural world, develop new energy technologies and push the limits of science in service to mankind and the planet.
“Success in creating AI would be the greatest event in human history. Unfortunately it may be the last...”
Stephen Hawking
But there is a darker side: autonomous weapons systems and AI war; total surveillance and population control; criminal and terrorist AIs loose on the internet, able to hack their way into any system and threaten people and infrastructure; or malevolent AIs with a superhuman understanding of human psychology, trawling through the clouds of electronic data we leave in our wake, knowing us better than we know ourselves and able to manipulate individuals on a mass scale to further their aims. The personalisation of propaganda will expand beyond targeted ‘fake news’ stories. AIs might interfere with people’s electronic and social networks to destroy their lives in far subtler ways than a bullet from a drone. Outperforming any stockbroker or currency dealer, they could slowly, incrementally, take control of everything.
For the immediate future, these - apparently still sub-human - intelligences are under the control of people, and those people’s agendas are the immediate issue. But when AIs become more intelligent than us, they may escape any of our control structures and become fully autonomous. Then they would be the masters of us all.
“I don’t understand why some people are not concerned.”
If they do take control, what will we be to them? Consider how we treat the ‘dumber’ animals of the natural world: along with vast swathes of nature, our closest animal relatives, the great apes, are on the brink of extinction. If our new intelligences inherit these values, we are surely doomed.
This new world is coming. We can’t stop it, but we can prepare. We can create global rules that help us to harness the immense potential of these technologies and mitigate the risks. Policymakers need to be aware of current advances and make assessments of likely future scenarios. We could introduce laws to, for example, prevent the use of AI to manipulate human psychology, and ban autonomous weapons systems. We could create rules of AI transparency, so that their objectives are open to public scrutiny. We have the opportunity to build new economic systems that redistribute the fruits of technology to all: models of income based on contribution to the health of society and nature, configured to offset the massive job losses this technology will bring. Your own personal AI could lobby for your interests. We need to start developing friendly AI technologies that police other AIs in the interests of humanity and the natural world, and to create global structures where AI goals are subject to wise human oversight.
Done right, the benefits to all will be enormous: a flourishing new world that we would all want to live in, with phenomenal increases in wealth and productivity and greater equality, safety and comfort for everyone. We would be living in a nature reborn. But in the wrong hands - badly coded, managed without proper thought for unintended consequences, or built with the aim of seizing financial, political and military power - these machines could destroy us all.
Are our legal and governmental structures fit for the future? Does anyone in power really understand what is going on? We cannot wait for the first disaster before we address this issue – it may be our last. A cruel dystopia is not the worst outcome - life on earth is at risk. We all share a responsibility to help shape tomorrow’s world into something more beautiful, fairer and more natural than it is today. We have to start now.