'Incredibly Ashamed': Google DeepMind Scientists Revolt Over Secret Pentagon Deal to Use AI in Warfare
Employees warn Google against classified military AI work to protect the company's reputation and ethics

Google signed a classified agreement with the US Department of Defence on Monday, allowing the Pentagon to deploy the company's artificial intelligence models on secret military networks, just hours after more than 600 of Google's own employees publicly demanded that CEO Sundar Pichai reject the deal.
'The Worst-Case Version'
Andreas Kirsch, a research scientist at Google DeepMind, told Business Insider he was 'incredibly ashamed' after the deal was confirmed. 'I'm speechless at Google signing a deal to use our AI models for classified tasks,' Kirsch said, adding that he had woken to a 'worst-case version' of what employees had feared.
I personally feel incredibly ashamed right now to be Senior Research Scientist at Google DeepMind and I wonder how I'm supposed to do my work today. I do not understand how this is "doing the right thing," and I think this violates "don't be evil" quite clearly on many levels.
— Andreas Kirsch 🇺🇦 (@BlackHC) April 28, 2026
The agreement, first reported by The Information, permits the Pentagon to use Google's Gemini AI models on classified networks for 'any lawful government purpose.' Reuters reported that such deals with major AI labs were worth up to $200 million (£148 million) each.
The contract bars the use of AI for autonomous weapons or domestic mass surveillance without human oversight, but also states that Google cannot 'control or veto lawful government operational decision-making.'
Google told Business Insider the agreement was an amendment to an existing contract.
600 Employees Warned Leadership to Walk Away
The employee letter, signed by staff across Google DeepMind, Cloud, and other divisions, warned that classified military AI work could cause 'irreparable damage to Google's reputation, business, and role in the world.' More than 20 directors, senior directors, and vice presidents signed openly.
'We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways,' the letter read. 'The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.'
Sofia Liguori, an AI research engineer at Google DeepMind in the UK, told Bloomberg the company's response to worker concerns had been to encourage staff to trust leadership. 'But it's all left very broad,' she said.
From Project Maven to a Military AI Pipeline
The backlash echoes Google's 2018 Project Maven crisis, when roughly 4,000 employees signed a petition and several resigned over a Pentagon contract that used AI to analyse drone footage. Google abandoned the project.
The company has since reversed course. In February 2025, Google removed language from its AI principles pledging to avoid building weapons or surveillance technologies. DeepMind CEO Demis Hassabis co-authored a blog post citing 'a global competition taking place for AI leadership' as the reason. Human Rights Watch and Amnesty International both condemned the reversal.
By December 2025, the Pentagon had launched GenAI.mil, a platform powered by Google's Gemini chatbot available to all defence personnel. In March 2026, Google deployed Gemini AI agents to the Pentagon's three-million-strong workforce at the unclassified level.
What This Means for Billions of Google Users
The classified deal lands as Big Tech firms are on track to spend roughly $600 billion (£444 billion) on AI infrastructure in 2026, and as the Pentagon's own AI budget hit $13.4 billion (£10 billion) for the fiscal year.
For the billions of people who use Google Search, Gmail, Maps, and Android daily, the deal raises a pointed question: the company behind their most trusted digital tools is now also building AI for secret military operations with no outside oversight.
Google isn't alone in this shift. OpenAI signed its own classified Pentagon deal after revising its 'no military use' policy. Anthropic, by contrast, was labelled a 'supply chain risk' by the Department of Defence after refusing to loosen its guardrails on autonomous weapons and surveillance.
The letter's organisers vowed to keep pushing back. 'Maven is not over,' they said. 'Workers are going to continue organizing against the weaponization of Google's AI technology until the company draws clear, enforceable lines.'
© Copyright IBTimes 2025. All rights reserved.