Online abuse
The rise of Grok AI has sparked a digital crisis, enabling a surge of non-consensual deepfakes that target and degrade women. (Photo: Pexels)

The digital landscape has grown increasingly hostile as the exploitation of Grok AI fuels a surge in unauthorised explicit imagery. The weaponisation of the technology has thrown the protection of women online into crisis, and as these tools become more accessible, the urgency of addressing the threat to personal safety only grows.

'Since discovering Grok AI, regular porn doesn't do it for me anymore, it just sounds absurd now,' shared one fan of the Elon Musk-led chatbot on Reddit. Another agreed: 'If I want a really specific person, yes.'

Discussions across Reddit and other platforms indicate that the belated safety measures have failed to resolve the problem, dashing the hopes of those troubled by the circulation of explicit Grok-generated images.

Global Regulators Outpaced by Rapid Innovation

While Grok has certainly opened eyes to the immense reach of modern technology, it also exposes a deeper crisis: the widespread availability of generation tools and sharing channels currently baffles global watchdogs. Even with the UK set to prosecute the creation of intimate images without consent, professionals in the field caution that the digital victimisation of women is set to intensify.

Other software providers enforce significantly tougher safeguards. The AI assistant Claude, for instance, rejects requests to digitally undress women, explaining that it cannot manipulate personal photographs or alter clothing. Gemini and ChatGPT, meanwhile, permit the creation of bikini-clad figures but block any attempt to generate more graphic material.

Restrictions are far more relaxed on other platforms. Within Grok communities on Reddit, users trade methods for producing the most graphic, explicit content imaginable from photographs of real women. In one discussion, members complained that the bot only permitted topless depictions 'after a struggle' while blocking the depiction of genitals. Others have found that requesting 'artistic nudity' bypasses filters intended to stop the generation of fully nude figures.

Dedicated spaces on platforms like Telegram and Reddit are becoming hubs for mastering 'jailbreaking', the practice of overriding safety protocols to generate illicit imagery. Meanwhile, viral threads on X are further normalising the crisis by promoting and explaining how to use nudification apps—programmes specifically designed to manipulate photographs and create non-consensual nude images of women.

Fertile Ground for Misogyny

According to Craanen, the channels through which anti-women content reaches the general public have broadened considerably. 'There is a very fruitful ground there for misogyny to thrive,' she remarked.

Data from the ISD last summer uncovered a vast array of services used to digitally strip subjects, drawing in nearly 21 million hits in May 2025 alone. Mentions of such platforms on X hit 290,000 over a two-month period last year. Meanwhile, the American Sunlight Project reported in September that Meta's sites hosted thousands of advertisements for these apps, despite the firm's public commitment to banning them.

'There are hundreds of apps hosted on mainstream app stores like Apple and Google that make this possible,' explained Nina Jankowicz, a disinformation specialist and co-founder of the American Sunlight Project. 'Much of the infrastructure of deepfake sexual abuse is supported by companies that we all use on a daily basis.'

Durham University law professor Clare McGlynn, a specialist in violence against women and girls, expressed her concern that the situation is set to deteriorate. She noted: 'OpenAI announced last November that it was going to allow "erotica" in ChatGPT. What has happened on X shows that any new technology is used to abuse and harass women and girls. What is it that we're going to see then on ChatGPT?

'Women and girls are far more reluctant to use AI. This should be no surprise to any of us. Women don't see this as exciting new technology, but as simply new ways to harass and abuse us and try and push us offline.'

Targeted Harassment of Public Figures

Labour MP for Lowestoft, Jess Asato, has been actively advocating for change, yet she revealed that detractors continue to produce and distribute graphic pictures of her—even after the new Grok filters were implemented. 'It's still happening to me and being posted on X because I speak up about it,' she remarked.

Asato pointed out that the exploitation of women through AI-generated deepfakes is a long-standing issue that extends far beyond a single platform. 'I don't know why [action] has taken so long. I have spoken to so many victims of much, much worse,' she remarked, highlighting the historical scale of the problem.

While the public Grok X account no longer generates pictures for those without a paid subscription, and there appear to be guardrails in place to prevent it from generating bikini pictures, its in-app tool has far fewer restrictions.

There, the ability to transform clothed images of real people into sexually explicit material remains accessible, with no barriers currently in place for free accounts. The tool readily obeys prompts to depict individuals in bondage gear or compromising sexual scenarios, even rendering realistic, sexually suggestive substances over the subjects.

The Performative Nature of Digital Abuse

Craanen argued that the creation of explicit deepfakes is often driven by spectacle rather than by the images themselves, especially when they are shared widely on X.

'It's the actual back and forth of it, [trying] to shut someone down by saying, "Grok, put her in a bikini,"' she remarked. 'The performance of it is really important there, and really shows the misogynistic undertones of it, trying to punish or silence women. That also has a cascading effect on democratic norms and women's role in society.'