Stephen Hawking warns of the potential dangers of artificial intelligence. Reuters

Stephen Hawking has warned that artificial intelligence has the potential to be the downfall of mankind.

The physicist has written an article in The Independent warning of an uncertain future in which technology learns to control itself.

Discussing Johnny Depp's latest film Transcendence, which delves into a world where computers can surpass the abilities of humans, Hawking said dismissing the film as science fiction could be the "worst mistake in history".

"AI research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring," he wrote.

Hawking said the potential benefits of this technology are huge, with the possibility of eradicating war, disease and poverty, and that success in creating AI "would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."

In the short to medium term, he said, militaries are developing autonomous weapon systems, which the UN is working to ban.

While unmanned drones are ultimately controlled by humans, future weapons could be fully autonomous.

"Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.

"An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a 'singularity' and Johnny Depp's movie character calls 'transcendence'.

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Hawking said experts are not prepared for these scenarios. By way of comparison, he said that if aliens contacted us to say they would arrive within a few decades, scientists would not simply sit back and wait.

"Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future Life Institute.

"All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks."