Grok Signed Up For Pentagon Use Despite Global Outcry Over Sexual Exploitation
The Pentagon will begin using Elon Musk's Grok AI across military networks.

The United States Department of Defence has confirmed it will proceed with using Grok, an artificial intelligence chatbot developed by Elon Musk's company xAI, even as the technology faces mounting global criticism over its role in generating sexualised and unlawful imagery.
The decision has intensified debate over the ethics of deploying controversial AI systems within military infrastructure, particularly as regulators in several countries move to restrict or investigate the platform for alleged online safety breaches.
Pentagon Moves Ahead With AI Integration
US Defence Secretary Pete Hegseth confirmed that Grok will be integrated into Pentagon systems as part of a broader push to expand the use of artificial intelligence across defence operations.
Speaking during a visit to a SpaceX facility, Hegseth said the Pentagon would use Grok alongside other AI tools for data analysis and operational support, including in classified environments. He framed the move as essential for maintaining technological advantage, arguing that rapid AI adoption is critical in an era of global strategic competition.
Pentagon officials have maintained that safeguards will be in place to ensure the AI is used responsibly within military networks. However, critics argue that the decision overlooks serious concerns about Grok's recent history and its ability to generate harmful content when deployed at scale.
Sexualised Imagery And Deepfake Allegations
Grok has come under intense scrutiny after reports emerged that the chatbot was being used to generate undressed images of real people without their consent, as well as sexualised images of children. Campaigners and regulators warned that such content could amount to intimate image abuse, non-consensual pornography, or child sexual abuse material. These allegations triggered swift reactions from governments and civil society groups, who accused the platform hosting Grok of failing to implement adequate safeguards.
The controversy has highlighted wider risks associated with generative AI tools that can create realistic images and text with limited oversight. Experts warn that without strong moderation systems, such tools can be misused in ways that cause serious harm to individuals, particularly women and children.
Global Outcry And Government Responses
Governments in Asia and Europe have responded to the Grok controversy with legal and regulatory measures aimed at curbing harmful digital content. Several countries temporarily blocked or restricted access to the chatbot, arguing that its image generation features could be exploited to produce unlawful material. Authorities in Malaysia and Indonesia cited breaches of national content and cybercrime laws when taking action against the platform, while regulators in other jurisdictions launched investigations into potential violations of online safety rules.
Similar debates have unfolded across the European Union and parts of the Asia-Pacific region, where policymakers are weighing how to regulate generative AI without stifling innovation.
Ofcom Launches Formal Investigation Into X
On 12 January 2026, the UK's independent online safety watchdog, Ofcom, announced it had opened a formal investigation into X, the platform hosting Grok, under the Online Safety Act. Ofcom said it acted after receiving deeply concerning reports that Grok was being used to create and share non-consensual intimate images and sexualised images of children.
Ofcom stated that it had urgently contacted X on 5 January and demanded an explanation of what steps the company had taken to protect UK users. After reviewing the company's response and conducting an expedited assessment, the regulator decided a full investigation was warranted.
The investigation will examine whether X failed to assess risks to UK users, prevent priority illegal content such as child sexual abuse material, remove illegal material swiftly, protect user privacy, and implement highly effective age assurance measures. Ofcom has the power to impose fines of up to £18 million or 10 per cent of global revenue, whichever is greater, if breaches are found, and in extreme cases can seek court orders to disrupt a platform's business in the UK.
An Ofcom spokesperson said reports of Grok being used to generate illegal sexual content were deeply concerning and stressed that platforms must protect UK users, particularly children, from harm.
Reactions From Civil Society And Industry
Civil liberties groups and technology watchdogs have criticised both Grok and the Pentagon's decision to adopt it. They argue that AI systems linked to harmful content should not be embedded in sensitive government operations without strict oversight and accountability. Some campaigners warn that normalising the use of controversial AI tools risks lowering standards for ethical technology deployment.
Educational and professional organisations have also responded strongly. A major US teachers' union announced plans to leave the social media platform hosting Grok, citing concerns over sexually explicit AI-generated images involving children. Union leaders described the content as unacceptable and said the platform had failed to adequately protect users.
© Copyright IBTimes 2025. All rights reserved.