OpenAI Under Fire After US Military Agreement Triggers ChatGPT Subscription Cancellations and Privacy Fears
'Cancel ChatGPT' trend gains traction across social media

OpenAI is facing mounting backlash after confirming an agreement with the United States Department of Defense to deploy its artificial intelligence models on classified government networks, a move that has sparked a wave of ChatGPT subscription cancellations and renewed debate over AI ethics and data privacy.
The deal, which allows OpenAI technology to be used in defence settings, has triggered the 'Cancel ChatGPT' trend across Reddit and X, with users posting screenshots of cancelled subscriptions and urging others to switch to rival platforms. Privacy fears and concerns about military use of AI have quickly become central to the online reaction.
OpenAI has stated that the agreement includes safeguards and 'guardrails' designed to prevent misuse, including restrictions around fully autonomous weapons and mass surveillance. However, critics argue that the language permitting use for 'all lawful purposes' raises questions about how AI systems could ultimately be deployed.
ChatGPT Cancellations and Online Protests
Within hours of the announcement circulating, Reddit threads dedicated to artificial intelligence and ChatGPT were filled with users expressing dissatisfaction. Some described the partnership as a breach of trust, while others shared step-by-step guides on how to delete accounts and export personal data, as reported by Windows Central, which noted that the 'Cancel ChatGPT' movement was going mainstream following the OpenAI–US Department of Defense agreement.
Although there are no publicly released figures confirming the scale of cancellations, the visibility of the protest movement has grown. Posts using phrases such as 'no ethics' and 'selling out' have trended in AI-focused online communities.
At the same time, Anthropic's chatbot Claude has climbed to the top of the Apple App Store rankings in several regions, a development widely discussed alongside the OpenAI controversy. While it is not possible to attribute the ranking change solely to the Pentagon deal, the timing has fuelled speculation that users are actively exploring alternatives.
AI Ethics and Military Use
The controversy has also intensified comparisons between OpenAI and its competitor Anthropic. Earlier reports indicated that Anthropic declined to proceed with certain government arrangements over concerns related to mass surveillance and fully autonomous weapons systems.
OpenAI, by contrast, has said its defence contract contains clear red lines and additional safety measures. The company has emphasised that its AI models will not be used to develop autonomous weapons and that safeguards are embedded within the agreement.
The debate highlights a broader divide within the artificial intelligence sector over whether commercial AI models should be integrated into military and national security infrastructure. As AI systems become more capable, the ethical boundaries of their deployment are increasingly scrutinised by policymakers, technologists and users alike.
Privacy Concerns and Data Questions
A key driver of the backlash is concern over data privacy. Some ChatGPT users fear that interaction data could be exposed to government agencies or incorporated into defence projects. OpenAI has not indicated that individual user conversations will be shared with the Department of Defense, and there is no evidence of direct access to private accounts as part of the agreement.
Nevertheless, perception has played a significant role in the reaction. For many subscribers, the association with military applications alone has prompted a reassessment of their relationship with the platform.
The episode marks one of the most visible public trust challenges faced by OpenAI since ChatGPT became a mainstream consumer tool. As governments worldwide seek to integrate artificial intelligence into security operations, the response to this deal underscores the tension between national defence objectives and consumer expectations around AI ethics, transparency and privacy.
© Copyright IBTimes 2025. All rights reserved.