Viral Video Claims Claude AI May Have Played a Role in Targeting of School in Iran as Pentagon Launches Investigation Into Military AI Use
Questions grow over military AI use as a TikTok video claims Claude played a role in a targeting incident now under Pentagon investigation

A viral video has sparked international debate after claiming that an artificial intelligence system known as Claude may have played a role in a targeting incident that struck a school in Iran, prompting the Pentagon to launch a formal investigation into military AI use.
The Pentagon has acknowledged it is investigating the role of AI technologies in the incident, including the possible involvement of Claude, developed by Anthropic. The controversy has intensified scrutiny over how advanced AI tools are being used in military decision-making, with officials declining to confirm specific details about the systems involved.
Viral Video Raises Explosive Questions
The debate began after a video posted by creator Nate B. Jones circulated online, claiming the Pentagon is investigating whether AI contributed to the targeting decision. The creator argues that if viewers 'read between the lines' of the Pentagon's statement, the implications could be significant, adding that it is 'almost certainly Claude' being examined, given earlier reports that the system had been used in certain operations. The claims have not been independently confirmed, but they have intensified scrutiny of the Pentagon's growing reliance on AI-assisted technologies.
The Pentagon's AI Investigation
The Pentagon has confirmed it is reviewing the circumstances surrounding the targeting incident, with the investigation expected to examine how AI systems may have been used within the chain of decision-making. Officials have stressed that the review forms part of broader oversight of military AI use rather than an immediate determination of fault. Reports have also emerged claiming the Department of Defense plans to remove Claude from military infrastructure, a process some commentators suggest could take up to six months. If accurate, such a move would mark a significant shift in how the Pentagon approaches AI integration within defence networks.
Growing Tension Around Claude
The controversy arrives amid reports of increasing tension between the Pentagon and Anthropic, the company behind Claude, which is led by AI researcher Dario Amodei. Amodei has previously warned against allowing AI systems to operate autonomously in high-risk environments, emphasising that human oversight must remain central to decision-making and that AI should not function without a human 'in the loop' when life-or-death decisions are involved.
The Human Oversight Question
One of the central issues raised by the controversy is whether existing rules governing military AI use were followed. The US Department of Defense has long maintained policies requiring meaningful human control over autonomous or semi-autonomous systems, designed to ensure AI tools assist operators rather than replace them in critical targeting decisions. Critics argue that if AI played a direct role in the targeting incident, it raises two uncomfortable possibilities: that the Pentagon may be seeking to shift responsibility onto the technology itself, or that established safeguards were not properly enforced.
Wider Debate Over Military AI Use
The controversy has reignited broader debate about the role of artificial intelligence in modern warfare. Supporters argue that AI systems can process data faster than humans and potentially reduce errors in complex combat environments, while critics warn that reliance on automated systems introduces new risks when algorithms are used to analyse intelligence and recommend targets. As nations continue to develop increasingly sophisticated systems, questions about accountability and ethical oversight are becoming more pressing.
What Happens Next
The Pentagon's investigation remains ongoing, and officials have provided limited information about its scope. It remains unclear whether the review will confirm that Claude or any other AI system played a direct role in the incident. As governments increasingly turn to AI to enhance defence capabilities, the outcome of the investigation could have far-reaching consequences for how such systems are deployed and whether stricter safeguards will be required.
© Copyright IBTimes 2025. All rights reserved.