A top US general has warned against fitting military hardware with autonomous weapons systems, for fear that they could go rogue.
General Paul Selva aired his concerns over autonomous military technology during a Senate Armed Services Committee hearing on 18 July. Selva was addressing the ethics of ensuring a human is always involved with the decision-making process when it comes to autonomous weapons killing the enemy.
The military must keep "the ethical rules of war in place lest we unleash on humanity a set of robots that we don't know how to control," Selva said in response to questioning by Senator Gary Peters, The Hill reports.
Selva, who is the second highest-ranking general in the US military, was being questioned over the expiration, later this year, of a directive to prevent autonomous systems from opting to kill without human authority. Peters suggested to Selva that the US's enemies would not hesitate to deploy such self-thinking systems, killing without human supervision.
"Our adversaries often do not consider the same moral and ethical issues that we consider each and every day," Peters suggested to Selva, who replied: "I don't think it's reasonable for us to put robots in charge of whether or not we take a human life...[America should continue to] take our values to war. There will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action."
But even though the US military opposes developing autonomous weapons itself, Selva says it will "address the development of those kinds of technologies and potentially find their vulnerabilities and exploit those vulnerabilities."
Elon Musk, the boss of electric car company Tesla and rocket manufacturer SpaceX, has been a vocal advocate for regulating artificial intelligence. Just a day before Selva's comments, Musk said AI poses a "fundamental risk to the existence of civilisation...I have access to the very most cutting edge AI, and I think people should be really concerned about it."
Musk added: "AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late."