Alphabet's artificial intelligence-focused company DeepMind has announced the creation of a new team to answer ethical and societal questions raised with the development of AI, according to a report in The Verge.
The London-based group, DeepMind Ethics & Society (DMES), will conduct and support open research and investigation to complement the company's work in AI science and application.
According to the company, the team will explore the real-world impact of AI with the dual aim of helping "technologists put ethics into practice", and helping "society anticipate and direct the impact of AI so that it works for the benefit of all".
The key ethical areas the group aims to address include privacy, the economic impact of automation, AI morality and values, AI risks and their management, and governance and accountability.
Currently, the team, which has been in the works for over 18 months, has eight staffers and six external fellows – including Oxford philosopher Nick Bostrom, who is known for his work on existential risk, human enhancement ethics, and superintelligence risks. However, the company expects the number to grow to 25 over the next year.
"We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work," co-leaders of the group, Verity Harding and Sean Legassick, wrote in a blog post announcing the new team.
They plan to publish their research starting in 2018 and to establish partnerships with academic groups researching similar areas.
Earlier this year, DeepMind made headlines when its program became the first machine to defeat a world champion at Go – an ancient Chinese board game.
The announcement comes as AI continues to be a cause of concern for many, particularly with automation posing a threat to jobs and the development of 'killer robots' – autonomous weapon systems that could operate with no human in the loop.
Last month, over 100 technology leaders, including DeepMind's co-founder Mustafa Suleyman and Tesla CEO Elon Musk, wrote a letter to the UN calling for a ban on lethal autonomous weapon systems. "We do not have long to act," the letter read. "Once this Pandora's box is opened, it will be hard to close."