Google pledged Thursday that it will not use artificial intelligence in applications related to weapons, in surveillance that violates international norms, or in technology that works in ways that go against human rights. It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the U.S. military to use its AI technology to analyze drone footage.
The principles, spelled out by Google CEO Sundar Pichai in a blog post, commit the company to building AI applications that are "socially beneficial," that avoid creating or reinforcing bias, and that are accountable to people.
The search giant had been formulating a patchwork of policies around these ethical questions for years, but finally put them in writing. Aside from making the principles public, Pichai didn't specify how Google or its parent Alphabet would be accountable for conforming to them. He also said Google would continue working with governments and the military on noncombat applications involving such things as veterans' health care and search and rescue.
"This approach is consistent with the values laid out in our original founders' letter back in 2004," Pichai wrote, citing the document in which Larry Page and Sergey Brin set out their vision for the company to "organize the world's information and make it universally accessible and useful."
Pichai said the latest principles help the company take a long-term perspective "even if it means making short-term trade-offs."
The document, which also enshrines "relevant explanations" of how AI systems work, lays the groundwork for the rollout of Duplex, a human-sounding digital concierge that was shown off booking appointments with human receptionists at a Google developers conference in May.
Some ethicists were concerned that call recipients could be duped into thinking the robot was human. Google has said Duplex will identify itself so that wouldn't happen.
Other companies leading the race to develop AI are also grappling with ethical issues — including Apple, Amazon, Facebook, IBM and Microsoft, which have joined with Google to form a group called the Partnership on AI.
Making sure the public is involved in the conversations is important, said Terah Lyons, director of the partnership.
At an MIT technology conference on Tuesday, Microsoft President Brad Smith even welcomed government regulation, saying something "as fundamentally impactful" as AI shouldn't be left to developers or the private sector alone.
Google's Project Maven with the U.S. Defense Department came under fire from company employees concerned about the direction it was taking the company.
A company executive told employees this week the program would not be renewed after it expires at the end of 2019. Google expects to hold talks with the Pentagon over how it can fulfill its contractual obligations without violating the principles outlined Thursday.
Peter Asaro, vice chairman of the International Committee for Robot Arms Control, said this week that Google's backing off from the project was good news because it slows down a potential AI arms race over autonomous weapons systems. What's more, letting the contract expire matters for Google's business model, which relies on gathering vast amounts of user data, he said.
"They're a company that's very much aware of their image in the public conscious," he said. "They want people to trust them and trust them with their data."