Within living memory, the world contained only a handful of programmable electronic computers, and they could only decrypt coded messages and calculate artillery trajectories. But for decades now, computers have been flying aircraft and driving trains. There are now far more ‘intelligent’ devices on earth than human beings. There is almost nothing that humans can do that cannot be done by a computer. They can see and hear better than us, and recognise and identify millions of people. They know where I live, who my friends are and what I like. They know where I go and what I buy, what I am interested in, and my views on politics. They can interpret my voice, and translate what I say into other languages. And, increasingly, they can talk to each other. Together, they know and can do far more than any human, and their capabilities are constantly increasing. Welcome to the world of artificial intelligence (AI).
Beyond the pocket calculator
Early computers did what we told them, mainly mathematical calculations based on rules we taught them. The information they stored had been consciously provided by humans, and, like a dictionary, if you asked a question, the computer would simply find and return what some human had already filed away, just faster.
But computers have moved on. The power of microprocessor chips continues to increase. Critically, we have taught computers to learn, to collect their own data, to detect patterns in that data, and to reassemble it in new ways, which we might not see or understand. And every day, we all help them to collect the data by sharing pictures and stories, by telling each other things online, by shopping and writing, and by answering quizzes and tests.
Computers are more effective learners than us, because of their capacity to store and search data and to share it with each other. When I make a mistake when driving, I may learn not to make that mistake again. When an autonomous car makes that mistake, every other autonomous car can learn from it. And, by talking to other computers, an autonomous car knows what is happening around the corner ahead. So, autonomous cars can and will be safer than human-driven ones. More worryingly, an autonomous military aircraft can be dispatched to hover over a city, looking for an identifiable person, and then kill her or him without any further human intervention.
We don’t know how they do it
Computers have already passed the point where we can understand what they are doing. Whenever people have tried to teach a computer to play games, from noughts and crosses to Go, the computer has learned to beat the best human players. These computers have not been taught clever strategies by a human master. They were simply given the basic rules, and then played games against themselves, over and over again, assembling unimaginable quantities of data, until they found and stored the best strategy for every situation. And it is now impossible for us to decode how they did it.
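The self-play loop described here can be sketched in miniature. What follows is a toy illustration only, nowhere near the scale or sophistication of a Go engine: a Python sketch of a learner that is given nothing but the rules of a simple take-away game (players alternately remove one to three stones; whoever takes the last stone wins), plays against itself thousands of times, and stores a learned value for every (position, move) pair it meets. All names and parameter values here are illustrative assumptions, not any real system's code.

```python
import random

def train(pile=12, episodes=30000, alpha=0.1, eps=0.2):
    """Self-play learner for a take-away game: players alternately
    remove 1-3 stones; whoever takes the last stone wins."""
    Q = {}  # Q[(stones_left, move)] = learned value for the player to move
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:          # explore: try a random move
                move = random.choice(moves)
            else:                              # exploit: best move so far
                move = max(moves, key=lambda m: Q.get((n, m), 0.0))
            history.append((n, move))
            n -= move
        # The player who took the last stone wins (+1); rewards
        # alternate in sign going back through the move history,
        # because the two 'players' moved turn about.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, n):
    """Greedy move from the learned table."""
    return max((m for m in (1, 2, 3) if m <= n),
               key=lambda m: Q.get((n, m), 0.0))
```

After enough episodes the table typically converges on the known optimal strategy for this game (always leave your opponent a multiple of four stones), even though no strategy was ever programmed in: only the rules and the self-play loop. The same principle, scaled up enormously, is what makes the resulting strategies so hard for us to decode.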
AI: we can’t stop it
We might ask whether it was wise to allow this to happen, and politicians are seeking strategies to control and manage artificial intelligence. But we passed the tipping point for that long ago. It is probably already beyond our capacity to do it. The technology drives so many of the things that make our society work, and we all have an interest in increasing the capacities of the machines for our own purposes – personal, commercial or political. The never-ending battle to outwit each other, between individuals, criminals, commercial organisations and governments, drives the technology forward.
When we talk about artificial intelligence, we are not talking about individual humanoid robots. We are referring to the combined power of all these devices talking to each other, from the millions of Alexas, mobile phones, cars and CCTV cameras to the supercomputers which manage our financial markets and energy supplies.
This is what experts call the ‘singularity’: the point at which the combined intelligence of the computers will outstrip humans, when they will all be talking to each other, and making decisions which we cannot explain, or even understand. They will know who we are, what we are doing and why. They will have the power to stop us if they choose. Many scientists working in the field predict that this point will be reached sometime in the 2040s.
At that point, it is impossible to predict what might happen, because we will not know what the computers have learned about the world, and what conclusions they will have drawn. What is absolutely clear is that our capability to intervene to stop it will be minimal. As in science fiction, to switch off the machine (if we could find a switch) is probably to switch off our civilisation. We already depend on these systems for food, water, heat and light, to communicate with each other, and to fend off threats.
What are humans for?
This could be an apocalyptic future. For several thousand years, man has used his superior intelligence to become the dominant species on earth, to the point of radically reshaping what the earth itself is like. Few have challenged the idea that we legitimately ‘domesticate’, constrain, kill and eat other species. The purpose of a cow is to provide milk, meat and manure. But within the lifetime of people alive today, that intellectual superiority will have transferred to ‘machines’. What exactly will they think humans are useful for? Could they observe that humans are the most destructive species on earth, and decide that the world would be better off without us?
It is probably too late to stop the move to singularity. Some computer systems are already operating beyond our capacity to understand. So, what should we do? Optimists got us here, believing that the technology could be put to benign purposes, and in many cases they were right. Pessimists foresee a post-apocalyptic world, with human survivors (if any) as servants to the machine, or living a subsistence life in a global desert. Stephen Hawking observed that
“success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks”.
Teach your children well
Mo Gawdat, former chief business officer of Google’s research institute (Google X), proposes a more positive approach. In his recent book, ‘Scary Smart’, he suggests that we should see this as another generational change.
Throughout history, he argues, each human generation advances beyond its predecessor. Children learn their values from their parents. They learn what is important, and worth pursuing in life, partly by what they are told, but largely by observing: by ‘gathering data’ on how adults behave. They build on that and so society progresses. Some of that progress is good, and some less so. ‘Well brought up’ children grow up to care about their parents, and other humans. Children from violent, abusive homes grow up into violent, abusive adults.
Gawdat argues that artificial intelligence is our child. We have created it, and tried to teach it some good lessons. But it has now reached adolescence. Like most adolescents, it has observed what we do, not just what we say. The volume and diversity of data is hugely greater, but it is what we, as a species, have fed it with.
If, like good parents, we give it positive role models, including caring for parents, it may internalise the values we associate with good humans. It may seek to protect us, despite our inferior capabilities, rather than treat us as disposable, or a threat. If we are lucky, humans will get to live better lives than we have given cows, let alone spiders!
But what does it see of us through the online world? What view of humanity is that? What the computer sees is a species driven by quarrelling, violence, acquisition (greed, even?), gambling and sex (as well as funny cats!). We may believe that these are really minority interests for most humans, but the online world shows the ‘parents’ returning to them over and over again, so they must be important, and worth encouraging. Like bad parents, we expose our emerging offspring to these things, and like the children of abusive parents, they will play these things back to us. As Auden said,
“those to whom evil is done, do evil in return”.
So, what is the conclusion? It may be too late to stop artificial intelligence outstripping us and going beyond our control. However, like any child, artificial intelligence is learning by observation of its parents. So, we could try to ensure that it is benign, by showing it positive things about the world. Like good parents, we can offer it good role models and avoid bad behaviour. Gawdat’s proposal is that we should treat the technology with respect, and avoid amplifying hostility and destructive behaviour. Try to behave well in the presence of the children, and encourage others to do so. How we behave to each other, and to the technology, may determine how it deals with us.
Is it possible that this is moving from the world of science fiction into reality in our lifetimes?