Meta says it is developing a tool to detect AI-generated content ahead of the 2024 elections. (Image: Reuters)

Alarmed by a global wave of deepfakes related to upcoming elections, Meta said it is ramping up its efforts to identify and label AI-generated images, building on its existing initiatives to combat misinformation.

To that end, the Mark Zuckerberg-led company is developing a tool that will identify AI-generated content when it surfaces on Facebook, Instagram and Threads.

Currently, Meta labels only images created with its own AI tools. Expanding this effort, the company said it will start adding "AI generated" labels to content made with tools from Google, Microsoft, OpenAI, Adobe, Midjourney and Shutterstock.

In a blog post, Nick Clegg, Meta's president of global affairs, confirmed that the company will begin labelling AI-generated images made with external tools "in the coming months" and will continue doing so through the next year.

The top executive also noted that Meta has been working with "industry partners to align on common technical standards that signal when a piece of content has been created using AI".

For context, Facebook caught a lot of flak during the 2016 US presidential election for allowing foreign actors, largely from Russia, to create and spread inaccurate content on its platform.

The platform has been subject to a lot of exploitation, particularly during the COVID-19 pandemic, when Facebook users spread vast amounts of misinformation. As if that weren't enough, the site was also used by Holocaust deniers and QAnon conspiracy theorists.

Is Meta ready to handle the spread of misinformation?

Meta is signalling that it is prepared to deal with bad actors, who are expected to use more advanced technology to disrupt the upcoming elections.

However, detecting AI-generated content can be an arduous task. A study from Stanford scholars found that services designed to identify AI-generated text tend to exhibit bias against non-native English speakers, and images and videos are no easier to screen.

To minimise this uncertainty, Meta is teaming up with other AI companies that embed invisible watermarks and some form of metadata in images created with their tools.
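
To give a sense of how metadata-based detection can work, the sketch below checks an image file for the IPTC "trainedAlgorithmicMedia" digital-source-type value, a marker some AI image generators embed in XMP metadata. This is a minimal illustration under that assumption, not Meta's actual pipeline, and the file name is hypothetical.

```python
# Minimal sketch: scan an image's raw bytes for the IPTC digital-source-type
# value that flags AI-generated imagery. XMP metadata is stored as plain XML
# inside the file, so a byte scan suffices for a quick check; a robust tool
# would parse the XMP packet properly. Illustrative only.

from pathlib import Path

# IPTC's controlled-vocabulary value for AI-generated imagery.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded metadata contains the AI marker."""
    data = Path(image_path).read_bytes()
    return AI_MARKER in data

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical file path
```

Note that this approach only works while the metadata survives; stripping or re-encoding the file defeats it, which is why watermark robustness matters.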

However, there are ways to remove watermarks. For instance, the watermark Samsung adds to AI-edited photos can be removed using another AI tool. Meta plans to address that problem too.

"We're working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers," Clegg wrote. "At the same time, we're looking for ways to make it more difficult to remove or alter invisible watermarks."

Monitoring audio and video is harder because there is no industry standard for AI companies to add invisible identifiers to such content. "We can't yet detect those signals and label this content from other companies," Clegg pointed out.

Meta will enable users to voluntarily disclose when they post an AI-generated video or audio clip. According to the post, the company "may apply penalties" to users who share a deepfake or other AI-generated content without disclosing it.

Meanwhile, Microsoft CEO Satya Nadella has stated that existing technology is capable of protecting the US elections from AI-generated deepfakes and misinformation.