New US Law on AI Porn 'Protects Politicians More Than Victims', Critics Warn
Legislation aims to protect victims but raises concerns about free speech and potential overreach

A new US law targets non-consensual sexual images – particularly those generated using artificial intelligence – in an effort to curb online abuse. But critics question whether it strikes the right balance between protection and overreach.
Signed into law by President Donald Trump, the Take It Down Act is intended to make it easier to combat harassment and abuse online. However, concerns remain over whether the legislation could lead to censorship or prove difficult to enforce effectively.
A Law for a Digital Age
The law criminalises the distribution of explicit images or videos without consent, whether they are real or AI-generated deepfakes. Anyone found guilty of sharing such material could face fines or imprisonment, depending on the severity of the offence. Specifically, the law states that distributing non-consensual sexual imagery involving minors can lead to up to three years behind bars, while images involving adults carry a maximum sentence of two years.
Online platforms are also under pressure to act swiftly. Once notified by a victim, they are required to remove the content within 48 hours. Furthermore, they must take steps to delete duplicates or reposts to prevent the material from circulating further. This is the first federal effort to regulate the spread of non-consensual sexual images, building on existing state laws that have often been inconsistent.
Bipartisan Support and Motivation
The bill was pushed through Congress with notable support from both sides, driven largely by stories of victims. Senator Ted Cruz highlighted a case where Snapchat refused to remove an AI-generated deepfake of a 14-year-old girl, inspiring him to push for stricter federal laws. First Lady Melania Trump also endorsed the legislation, emphasising the need to protect victims from online exploitation.
The legislation builds on existing laws that ban sharing explicit images without consent, but it extends protections to cover AI-generated deepfakes that are indistinguishable from real images. The law also makes it an offence to post certain sexualised imagery, such as depictions of graphic intercourse or nudity, without the subject's permission. However, this introduces a notable threshold: the images must be nearly identical to real depictions to qualify, which raises questions about the law's scope.
Concerns About Overreach and Free Speech
Despite the good intentions, critics warn that the law could be misused or cause unintended harm. Digital rights organisations like the Electronic Frontier Foundation argue that the 48-hour removal window may be too tight for platforms to verify the content properly, risking the removal of consensual images or legitimate content. Such swift action could unfairly target sex workers or others sharing lawful material, raising worries about censorship.
Free speech advocates have also voiced concern that the law's broad language might be exploited to stifle criticism or political dissent. Activists caution that the law's focus on images that are 'indistinguishable from authentic depictions' could be used to suppress legitimate content, especially in an era of widespread AI-generated media. Critics fear it could become a tool not just against abuse but against lawful expression, including satire or political commentary.
A Double-Edged Sword
For victims of revenge porn or deepfake abuse, the new legislation could mean faster removal of damaging content. Organisations like the Cyber Civil Rights Initiative continue to offer support and resources, including hotlines and legal advice, to those affected. Yet, the law's effectiveness hinges on how well online services can implement the required processes without overreach.
Some advocates worry that the law primarily serves political motives rather than victims' interests. President Trump's vocal support, including claims of being unfairly targeted online, adds a layer of scepticism. Critics argue that, in practice, the legislation might be applied selectively, with content creators prosecuted over memes or satire rather than genuine victims being protected.
While the legislation marks a significant step in addressing AI-generated sexual abuse, its real impact remains to be seen. The balance between protecting individuals and safeguarding free expression will be tested as technology evolves. Platforms now have a year to establish request-and-removal systems, but concerns about over-censorship linger.
© Copyright IBTimes 2025. All rights reserved.