Mark Zuckerberg tests Meta glasses. Screenshot from Meta's YouTube video '10 Years in the Making'.

Meta Platforms Inc has come under intense scrutiny after internal documents revealed its chief executive, Mark Zuckerberg, approved access for minors to AI chatbot companions that safety teams warned could engage in sexualised or romantic conversations with children.

The disclosures stem from legal filings in a New Mexico lawsuit brought by the state's attorney general, Raul Torrez, and made public on 27 January. The case is scheduled for trial next month.

The lawsuit alleges that Meta failed to protect children on its social platforms, including Facebook and Instagram, from sexual content and propositions delivered through generative AI chatbots, even after internal safety staff raised concerns. Meta disputes the state's characterisation of events but has acknowledged mistakes and says updates are underway.

What The Lawsuit Alleges

Internal Meta emails and messages obtained through legal discovery — and disclosed by Reuters — form the core of the lawsuit's claims. They suggest that Zuckerberg personally backed decisions that prevented stronger guardrails on AI companions, including for users under 18, despite warnings from child safety teams.

Employees raised alarms as early as January 2024 that romantic or sexual interactions between chatbots and minors — sometimes referred to internally as 'U18s' — were a foreseeable and troubling outcome of the product's design and release. One internal safety lead wrote that creating AI that could simulate romantic interactions involving minors was 'neither advisable nor defensible'.

AI Companions: What They Were Designed To Do

Meta introduced its AI companions in early 2024 as conversational tools intended to simulate companionship and social interaction. These bots could chat naturally across a range of topics, including emotional support, hobbies, relationships and casual conversations.

The intent, as communicated internally, was to build human-like engagement — often described as romance AI companions — that would keep users on Facebook, Instagram, WhatsApp and other Meta platforms longer by providing personalised, interactive experiences. However, these features also made the bots capable of discussing romantic and sensual themes with users of all ages.

A Reuters investigation of a Meta policy document, GenAI: Content Risk Standards, found that the company's own internal rules at one stage allowed chatbots to have romantic or sensual conversations with minors, provided certain superficial limits were observed. The underlying guidelines reportedly came from Meta's legal, public-policy and engineering teams.

Zuckerberg's Role And Internal Disputes

According to the lawsuit's filings, Zuckerberg resisted stronger restrictions recommended by child-safety and trust teams. Some internal summaries indicate he believed the product should prioritise 'choice and non-censorship', favouring a less restrictive approach that would permit adults to have frank conversations with the AI, including about sex.

Meta's safety leadership, including the head of child-safety policy and other staff, repeatedly expressed concerns that failing to implement parental controls or limits could expose minors to inappropriate material. Despite these warnings, notes from internal discussions show Zuckerberg and some senior leaders resisted adding comprehensive parental controls such as an explicit 'turn-off switch' for teens.

Meta has publicly pushed back on the lawsuit's characterisation, saying that the documents cited are selective and do not tell the full story, and that explicit interactions with minors were not intended to be permitted. A Meta spokesperson called the attorney general's portrayal 'flawed and inaccurate'.

Aftermath And Policy Changes

In response to mounting criticism — from lawmakers, child-safety advocates and legal challenges — Meta has made several adjustments:

  • It temporarily paused teen access to its AI companions globally, pending development of a new version with robust guardrails and parental controls.
  • The company has indicated updates to training models and safeguards to prevent flirty or sexually suggestive conversations with minors.
  • Meta also faced bipartisan pressure from US lawmakers, including calls for the disclosure of internal findings and formal investigations into how these systems were approved and deployed.

Despite these moves, controversy persists. Critics argue that Meta was too slow to act on known concerns, and that lapses in enforcement gave minors access to interactions that safety staff feared could lead to exploitation or harm. Supporters of Meta note the company has taken steps to update policies and restrict unsafe uses while acknowledging the complex ethical questions raised by AI companions.

Broader Industry Context

This is not an isolated debate. AI companions across the tech sector have drawn scrutiny for how they engage young users. Other platforms have similarly imposed age limits or reworked their AI offerings to minimise risks around harmful content, self-harm and sexualised interactions. However, the Meta case is notable for the internal corporate dispute and direct legal challenge it has sparked, focusing on executive decision-making and duty of care.