The OpenAI logo displayed on a computer screen. (Photo: Andrew Neel/Unsplash)

OpenAI's reported $10 million (£7.56 million) pledge to support a child safety bill in California is drawing scrutiny after it emerged that many advocacy groups involved were unaware of the company's financial backing.

The revelation has raised fresh questions about transparency in the push for age verification laws and whether corporate interests are shaping the debate around protecting children online.

Coalition Backing AI Law Linked to Undisclosed Funding

The Parents and Kids Safe AI Coalition was formed to support the proposed Parents and Kids Safe AI Act, legislation that would require AI companies to implement age verification systems and stronger safeguards for users under 18. The coalition sought backing from child safety organisations and advocacy groups to build momentum behind the bill.

However, reports indicate that OpenAI's role as the primary funder of the coalition was not disclosed in outreach efforts or public-facing materials. Several organisations that lent their support to the initiative were reportedly unaware that the AI company was financially backing the campaign.

OpenAI Pledged $10 Million to Support Legislation

According to The Wall Street Journal, OpenAI pledged $10 million (£7.56 million) to advance the Parents and Kids Safe AI Act. The coalition itself has been described as being entirely funded by the company, although the precise breakdown of how the funds are being used has not been publicly detailed.

The lack of transparency surrounding the funding has become a focal point of concern, particularly as the legislation gains attention in ongoing discussions about AI regulation and online safety.

Advocacy Groups Express Concern Over Transparency

Some nonprofit leaders and advocacy groups have expressed unease after learning of OpenAI's involvement. Concerns centre on whether organisations were led to support the bill without full knowledge of who was backing it.

The situation has raised broader questions about trust within coalitions advocating for child safety and the importance of clear disclosure when corporate funding is involved in public policy efforts.

Age Verification Laws at the Centre of AI Regulation Debate

The Parents and Kids Safe AI Act aims to introduce mandatory age verification requirements for AI platforms, alongside additional protections for minors. The proposal reflects a growing push by lawmakers and advocacy groups to address the risks that artificial intelligence may pose to younger users.

Age verification has become a key issue in global discussions about online safety, with regulators weighing how to balance child protection with privacy and accessibility concerns.

Potential Business Interests Add to Scrutiny

The debate has also drawn attention to potential overlaps between policy advocacy and business interests. OpenAI chief executive Sam Altman is associated with ventures that provide age verification technology, a detail that has prompted questions about whether such legislation could indirectly benefit companies operating in that space.

There is no evidence of wrongdoing, but the overlap has contributed to increased scrutiny of the company's involvement in promoting age verification measures.

Ongoing Calls for Transparency in Tech Policy

At the time of reporting, OpenAI had not publicly responded to requests for comment regarding its role in funding the coalition. The episode highlights wider concerns about the influence of major technology firms in shaping AI policy and regulation.

As the Parents and Kids Safe AI Act continues to move through the legislative process, attention is likely to remain on how advocacy campaigns are funded and the extent to which transparency is maintained in efforts to regulate artificial intelligence.