In a lecture at the University of Cambridge this week, Stephen Hawking made the bold claim that the creation of artificial intelligence will be “either the best, or the worst thing, ever to happen to humanity”.
The talk celebrated the opening of the new Leverhulme Centre for the Future of Intelligence, where some of the best minds in science will try to answer questions about the future of robots and artificial intelligence – something Hawking says we need to do a lot more of.
“We spend a great deal of time studying history,” Hawking told the lecture, “which, let’s face it, is mostly the history of stupidity.”
But despite all our time spent looking back at past errors, we seem to make the same mistakes over and over again.
“So it’s a welcome change that people are studying instead the future of intelligence,” he explained.
It’s not the first time Hawking has been worried about artificial intelligence.
Last year, he joined Elon Musk and hundreds of other experts in writing an open letter asking governments to ban autonomous weapons that might one day be able to turn against humans.
He’s also previously said that “the development of full artificial intelligence could spell the end of the human race”.
In Wednesday’s lecture, he admitted he was still worried about “powerful autonomous weapons” and “new ways for the few to oppress the many” that could come with artificial intelligence.
But he said if we can think about and address these issues now, the technology also has the potential to do good.
“We cannot predict what we might achieve when our own minds are amplified by AI,” he said.
“Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely we will aim to finally eradicate disease and poverty.”
The Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where Hawking is also a professor, has received more than US$12 million (£10 million) in grants to run research projects that will enhance the future potential of artificial intelligence, while carefully addressing the risks.
The centre was inspired partly by the university’s Centre for the Study of Existential Risk, which already offers courses in subjects such as “Terminator Studies”, in order to examine future potential problems for humanity.
While that centre focusses on a range of threats – such as climate change and war – the new Leverhulme Centre will look specifically at the issues that could arise from machines that think and learn like humans.
“Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together,” said director of the Leverhulme Centre, Huw Price.
“At present, however, we have barely begun to consider its ramifications, good or bad.”
With Google already developing artificial intelligence that can learn from its own memory; Elon Musk worrying about humans becoming the dumb “house pets” of AI in the future; and computer systems already rivalling four-year-olds in IQ tests, it’s definitely something worth thinking about sooner rather than later.
As Hawking says, it might end up being “crucial to the future of our civilisation and our species”.