
Hundreds of Google employees have signed a petition urging CEO Sundar Pichai to reject any deal that would allow the company's artificial intelligence systems to be used in classified Pentagon operations, warning of potential 'unmonitored harm' if safeguards are not guaranteed.

The letter, dated April 2026 and signed by more than 600 staff across Google DeepMind and Google Cloud, directly challenges reported negotiations between Google and the US Department of Defense over expanded military use of its Gemini AI models.

The petition comes at a time when major tech firms are increasingly being drawn into defence contracts involving artificial intelligence, raising internal and external concerns about oversight, accountability, and potential misuse.

It follows earlier tensions between AI developers and the US government over how far military agencies should be allowed to deploy advanced systems in sensitive or high-risk environments.

Google CEO Sundar Pichai Urged to Reconsider Work With Pentagon

In the letter sent to Sundar Pichai, employees argue that Google currently cannot guarantee its AI tools will not be used in ways that could cause harm without proper monitoring or control.

'As people working on AI, we know that these systems can centralize power and that they do make mistakes,' the employees wrote, according to a copy of the letter shared with The Hill. 'We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.'

The signatories pointed to reported discussions between Google and the Pentagon about deploying Gemini AI models in classified settings. According to reporting cited in the letter, such an agreement could allow the US military to use Google's AI systems for 'all lawful purposes,' though additional safeguards were reportedly discussed to prevent use in mass surveillance or autonomous weapons without human oversight.

However, the employees argue that those safeguards would be difficult to enforce in practice. The letter states that 'the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.'

Ethical AI Deployment At Risk

Google already does some limited work with the Pentagon, but only on non-classified projects. Employees warn that moving into secret or classified military work would significantly raise the risks.

They say this could seriously damage Google's reputation and public trust, because it would mean its AI is being used in sensitive defence operations. Some workers also argue that no matter what safety rules are added, they may not be enough to stop misuse in practice.

One of the organisers behind the petition said that technical safety measures alone might not work, because military rules can override or limit those controls. They argue Google should avoid classified military projects completely for now if it wants to prevent harm.

The concerns come after a similar dispute involving Anthropic. The Pentagon wanted AI systems that could be deployed for 'any lawful purpose', which can include highly sensitive areas such as surveillance or even support for autonomous weapons systems. Anthropic pushed back and asked for safeguards, particularly around domestic surveillance and fully automated weapons. The government did not accept those limits.

After that disagreement, Anthropic was effectively cut out of certain defence-related work and labelled a 'supply chain risk' by the US Department of Defense. That label is unusual and serious, because it is typically reserved for actors considered potentially unsafe or unreliable in sensitive government systems.

Anthropic later challenged this decision in court.

For other AI and tech companies, including Google, the episode sent a clear signal: refusing to fully open up technology for military use could mean losing access to major government contracts, or even being formally restricted. It also highlights a growing split inside big tech. Some companies want to work more closely with governments and defence agencies, while others worry about where the line should be drawn and how AI might be used once it enters military systems.