Stephen Hawking and Elon Musk are among the prominent scientists and experts pledging to ensure that the development of artificial intelligence benefits rather than harms mankind.

Hawking and Musk have been joined by senior figures from Google and Microsoft, as well as academics from Oxford University and the Massachusetts Institute of Technology, in signing an open letter from the Future of Life Institute (FLI) concerning artificial intelligence.

"There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase," the letter states. "The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence.

"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

The letter is published alongside a research priorities document that gives examples of how societal benefits can best be maximised through artificial intelligence.

These include optimising AI's economic impact through labour-market forecasting, and research into the law and ethics surrounding AI.

'The end of the human race'

[Image caption: Theoretical physicist Stephen Hawking has previously warned that AI could wipe out humanity]

Both Hawking and Musk have been vocal about the potential dangers of artificial intelligence in light of several advancements in the field.

In December, Hawking said AI could have serious implications for the future of humanity and could even result in its demise.

"The primitive forms of artificial intelligence we already have, have proved very useful," Hawking said at the launch of an upgrade to his communications system. "But I think the development of full artificial intelligence could spell the end of the human race.

"It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Musk, the Tesla and SpaceX CEO, has spoken several times about the dangers of developing AI, describing it as "like summoning the demon", "potentially more dangerous than nukes" and capable of deleting humans like spam email.

The FLI's open letter aims to highlight not just the dangers of AI but also the great benefits associated with its advancement.

"In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely," the letter concludes, "and that there are concrete research directions that can be pursued today."