
Actors in the US say they are worried after AI-generated ads were reportedly used to promote shows in a way that made it look like they appeared in sexual scenes they never filmed or agreed to.

The claims involve performers working in the fast-growing micro-drama industry, where short, mobile-first series are heavily promoted on social media and streaming apps. The concern is that some ads may have been digitally altered without the actors' clear permission.

The issue has emerged alongside the rapid rise of micro-dramas, also known as vertical series. These are short episodes made specifically for phones, often lasting just a minute or two. They've become very popular in the US, attracting major investment and fierce competition among apps vying for viewers' attention. This has reportedly resulted in aggressive marketing, with companies pushing out eye-catching ads designed to go viral and stand out in crowded social feeds.

AI-Generated Use of Actors Without Consent

According to Business Insider, actors working in these productions say they were surprised and upset after seeing promotional videos that seemed to make their performances look more sexual than what they actually filmed. In some cases, they say AI or edited marketing clips were used in ways they never agreed to.

One actor, Tess Dinerstein, said she came across an advert for a series called How to Tame a Silver Fox that started with a normal scene before cutting into what looked like explicit sexual content using her image.

'It was really jarring,' she said, adding that she feared the material could affect both her professional reputation and personal relationships. 'I felt like it delegitimised the work I do.'

She also made clear she does not film nude scenes, and worries viewers might now assume otherwise. Her experience highlights a broader concern among actors in the vertical drama industry, where many work on short-term contracts and do not always have control over how their images are used in promotional materials.

Another performer, Faith Orta, said she saw a TikTok ad featuring her character, partially undressed, even though nothing like that was filmed during the actual production. She said it left her feeling uneasy about losing control over how she is portrayed online. 'When AI jumps in and shows the audience something I don't want to show, it takes away the power over my own body,' she said.

David Eves, who also works in the same type of shows, said he saw a promotional video suggesting he was in a threesome scene that never actually happened. 'Not everyone knows I didn't agree to do that,' he said, adding that viewers might still believe the clip is real.

Micro-Drama Industry Growth and Pressure

Micro-dramas have become a fast-growing part of online entertainment, with short episodes designed mainly for mobile phones. Industry analysts say the format is expanding quickly because people are increasingly watching content on apps like TikTok, especially quick, story-driven clips focused on romance or drama.

As more companies enter the space, competition has become intense. To stand out, platforms are spending heavily on advertising. In some cases, industry estimates suggest that marketing costs can match or even go beyond the cost of actually making the shows, as companies fight for attention on crowded social media feeds.

Some actors say this pressure has led to more attention-grabbing ads, including promotional clips that may be edited or enhanced using AI to boost views. They also claim that production companies have sometimes said these ads were made by outside marketing teams, which leaves it unclear who is ultimately responsible.

Legal experts say industry-standard contracts often give producers broad control over how an actor's image is used for promotional purposes. But they also point out that AI tools have changed the landscape, creating unclear legal areas that older contracts were never designed to address.

Can AI-Generated Content Without Consent Be Legally Challenged?

Entertainment lawyer Jonathan Handel said disputes of this kind are difficult to challenge unless contracts specifically restrict how likenesses can be altered or reused. 'It's about the leverage,' he said. 'That's the brutal reality.'

Actors' unions have begun responding to the issue. The Screen Actors Guild–American Federation of Television and Radio Artists reached an agreement in 2023 that included restrictions on digital replicas without consent, though performers and representatives continue to push for stronger protections as AI technology evolves.

Some actors have begun updating their contracts to explicitly prevent AI from manipulating their images or voices. Others are exploring legal protections such as trademarking their likenesses, though enforcement remains uneven, particularly when content is distributed through international platforms.

One actor, Haley Lohrli, said she began receiving unwanted attention online after AI-altered promotional images of her character circulated. She later insisted on contract clauses preventing such manipulation, saying producers have since accepted the terms without dispute.

Tech platforms hosting the ads also face scrutiny. Companies including Meta and TikTok say they prohibit sexual or misleading content and require disclosure of AI-generated material, though enforcement is inconsistent. Reports have previously identified cases where synthetic or altered content appeared without proper labelling.

For now, many performers say they are left with limited options beyond contract negotiation and public complaint.