YouTube will introduce new guidelines for creators who post AI-generated content on the platform. (Image: Pexels)

YouTube has announced a major update to its AI policy that will require realistic-looking artificial intelligence (AI)-generated videos to carry a prominent label.

Generative AI has grown rapidly over the last few months, and there is no shortage of tools capable of producing synthetic content. These tools can effortlessly create realistic material that can be used to spread misinformation.

In a bid to monitor synthetically created content shared on YouTube that has the potential to mislead viewers, the online video-sharing platform has announced a new set of guidelines for creators.

New content labels and disclosure requirements

In its latest blog post, YouTube has announced new guidelines to keep an eye on AI-generated content shared in videos. It is worth noting that the new guidelines do not stop creators from using generative AI tools.

However, the guidelines are designed to ensure creators are accountable for the content they share with their audience. In line with this, YouTube will soon require creators to disclose synthetic and AI-generated content shown in their videos.

This rule will specifically apply to creators who use AI tools to produce realistic-looking content and upload it to YouTube. The aim is to ensure that viewers who are not familiar with such tools do not end up believing whatever is shown in the video.

In fact, even experts can sometimes struggle to identify AI-generated content. YouTube shared a slew of mock-ups showing how the platform will notify users about AI-generated content.

According to one of the screenshots, YouTube will show a disclosure label in the video description panel. The label reads, "Altered or synthetic content: Sound or visuals were altered or generated digitally."

Likewise, there are labels for content uploaded to YouTube Shorts and Dream Screen. However, just adding a disclosure will not be enough for some AI-generated videos.

For instance, YouTube may delete a synthetically created video showing realistic violence. Moreover, creators who repeatedly fail to disclose AI-generated content are likely to face penalties such as content removal or suspension from the YouTube Partner Program (demonetisation).

Aside from this, YouTube will let its music partners request the removal of AI-generated music content that mimics an artist's singing or rapping voice. Once a removal request is submitted, the YouTube team will consider multiple factors to decide whether the content should be taken down.

The new YouTube guidelines are set to roll out over the coming months and into early next year. The move doesn't come as a surprise, considering how much AI-generated and deepfake content has been circulating online lately.

Separately, the company recently launched a global effort to stop people from using ad blockers while watching YouTube videos. It is also working on an AI-powered bot that will provide additional information about the videos you are watching.