Trump Signs Landmark Law Criminalising AI Deepfake Pornography: What Does This Mean For Victims?
AI deepfake porn is now illegal. Here's what this means for victims like Taylor Swift.

What was once the stuff of science fiction has turned sinister in the wrong hands. While artificial intelligence was built to streamline human tasks, it's increasingly being weaponised—and one of the darkest uses is non-consensual, explicit deepfakes.
In response, US President Donald Trump has signed into law a new federal statute that makes it a crime to create or share pornographic AI deepfakes. The move marks a major turning point in the fight to protect individuals from digital exploitation.
AI Deepfakes Now Officially a Federal Crime
On Monday, 19 May 2025, President Trump signed the 'Take It Down Act' into law during a White House ceremony. The legislation makes it a federal crime to distribute sexually explicit images—real or AI-generated—without the subject's consent.
Under the new rules, tech platforms such as Facebook are legally required to remove such content within 48 hours of being notified. Failure to comply could expose them to legal penalties.
As CNN reported, the bill—officially titled the Tools to Address Known Exploitation by Immobilising Technological Deepfakes on Websites and Networks Act—also compels tech companies to take greater responsibility for monitoring and removing harmful content from their platforms.
'With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will. This is wrong, so horribly wrong, and it's a very abusive situation,' Trump said during the signing, according to Mashable SEA.
'This will be the first-ever federal law to combat the distribution of explicit imagery posted without a subject's consent... We've all heard about deepfakes. I have them all the time, but nobody does anything,' he added.
Victims Can Now Sue for Damages
The law's passage comes after a wave of high-profile cases involving AI-generated pornographic images of public figures like Taylor Swift and US Representative Alexandria Ocasio-Cortez. These unauthorised deepfakes were widely circulated on social media platforms, especially X (formerly Twitter), sparking outrage and calls for stronger legal action.
But experts warn it's not just celebrities who are at risk. Women with no public profiles are increasingly becoming victims, with reports of AI porn scandals in schools emerging across multiple US states. These manipulated images often lead to harassment, bullying, blackmail, and severe emotional distress.
The Take It Down Act now gives victims legal standing to sue not only the individuals who created the deepfakes but also the platforms that fail to remove them.
Renee Cummings, a criminologist at the University of Virginia, cautioned that enforcement will be key to the law's success. 'The efficiency of this law will depend on how it is enforced and the kinds of punishments it imposes,' she said.
A New Era of Accountability
As generative AI continues to evolve, so do the ethical and legal challenges surrounding its misuse. The Take It Down Act sets a new precedent in the battle to curb digital exploitation, placing responsibility squarely on both creators of harmful content and the tech companies that host it.
Whether it will serve as an effective deterrent remains to be seen—but for now, it offers a long-awaited legal remedy for victims of this disturbing digital abuse.
© Copyright IBTimes 2025. All rights reserved.