Meta Platforms Inc's logo is seen on a smartphone in this illustration picture
Meta has disbanded its Responsible AI team to shift its focus to generative artificial intelligence. Reuters

Meta has reportedly split up its Responsible AI (RAI) team, which was formed to oversee the safety of its artificial intelligence (AI) ventures.

According to an internal post cited by The Information, Meta will be putting more resources into generative artificial intelligence.

The report further suggests the company will move most of the RAI members to its generative AI product team, while others will work on the company's infrastructure.

Meta's commitment to Responsible AI

Meta claims it wants to be responsible when it comes to developing AI. In fact, there's a dedicated page on the company's official website that highlights its "five pillars of responsible AI", including accountability, transparency, safety and privacy.

According to Jon Carvill, who represents Meta, the company will "continue to prioritise and invest in safe and responsible AI development".

Although the company has broken up the RAI team, Carvill says those members will "continue to support relevant cross-Meta efforts on responsible AI development and use".

It is worth noting that Meta restructured the team earlier this year. According to a Business Insider report, that restructuring included layoffs that left RAI "a shell of a team".

According to the report, the RAI team had little autonomy over its own initiatives, which underwent lengthy negotiations with stakeholders before they could be implemented.

The RAI team was formed back in 2019 to identify problems with Meta's AI training approaches. For instance, it checked whether the company's AI models were trained on a wide range of information in a bid to avoid moderation issues on its platforms.

When Meta's social platforms adopted automated systems, problems followed, such as a Facebook mistranslation that led to the arrest of a Palestinian man on false allegations.

The man had reportedly posted "good morning", but Facebook translated his post as "attack them" and "hurt them".

Likewise, WhatsApp's AI sticker generation feature was found to produce biased images for certain prompts, and Instagram's recommendation algorithms were reported to have actively promoted paedophile networks.

Meanwhile, governments around the world are working to create regulatory guardrails for AI development. In line with this, US President Joe Biden directed government agencies to develop AI safety guidelines earlier this year.

Similarly, the European Union has published its AI principles but has yet to pass its AI Act. Last month, a source said Meta had proposed offering European users a subscription-based version of its social media platforms if they did not want to be tracked for ads.

Aside from this, the company is reportedly gearing up to roll out generative AI tools for all advertisers. These tools are expected to help with tasks such as creating image backgrounds and producing variations of written copy.