Sam Altman and Elon Musk (Wikimedia Commons)

A legal dispute between OpenAI and co-founder Elon Musk has escalated into a wider debate about the existential risks of artificial intelligence, after Musk warned in court that unchecked AI development could pose a threat to humanity.

Testifying during proceedings over OpenAI's corporate structure and mission, Musk described what he called the 'worst-case situation' as a 'Terminator situation,' arguing that advanced AI systems, if developed without strict safeguards, could become uncontrollable and dangerous to human life.

He warned in court that 'the biggest risk would be that AI kills us all,' framing the lawsuit not merely as a governance dispute but as a question of long-term human survival. According to him, 'that is the outcome we need to avoid, and it requires being extremely careful about how these systems are developed.'

Musk Frames Case as Existential Risk Fight

The case centres on Musk's allegation that OpenAI has drifted from its original nonprofit mission and become increasingly profit-driven following major investment deals, including partnerships with large technology firms.

However, Musk repeatedly shifted the focus away from corporate governance, instead stressing what he views as the broader danger posed by advanced AI systems. He argued that the pace of development demands extreme caution, particularly as AI models grow more autonomous and capable.

His testimony reflects his long-standing views on artificial intelligence, which he has previously described as one of the most serious long-term risks facing humanity.

'Terminator Scenario' Rhetoric Draws Scrutiny in Court

During proceedings, Musk's references to science-fiction-style outcomes, including 'Terminator' scenarios, drew observers' attention and appeared to frustrate the court at times, with the judge urging a focus on legal rather than speculative arguments.

Despite this, Musk continued to link the case to broader concerns about the trajectory of AI development, suggesting that safety considerations were central to how organisations like OpenAI should operate.

His comments align with a wider debate within the AI industry about so-called existential risk, the possibility that advanced artificial intelligence could, in extreme scenarios, cause irreversible harm to humanity.

OpenAI Disputes Musk's Claims

OpenAI has pushed back against Musk's arguments, maintaining that its evolution into a for-profit structure was necessary to secure the funding required to build and scale advanced AI systems.

The company has also pointed to Musk's involvement in competing AI ventures, suggesting that his legal challenge is shaped in part by competitive positioning in the rapidly expanding AI industry.

The dispute highlights growing tensions among AI pioneers over governance, safety, and commercial direction as global investment in the sector accelerates.

AI Safety Fears Remain Divisive Among Experts

While Musk's warnings are among the most high-profile, views within the AI research community remain divided. Some experts argue that increasingly capable systems could pose significant long-term risks if misaligned with human goals, while others caution that such scenarios remain speculative.

Academic literature on AI safety has explored scenarios involving loss of control over highly advanced systems, including concerns about alignment and autonomy in future models.

At the same time, critics of existential-risk narratives argue that such warnings can distract from more immediate concerns, such as bias, misinformation, and labour disruption.

A Legal Case With Wider Implications

Although the court case is formally focused on OpenAI's structure and contractual obligations, it has effectively become a platform for competing visions of the future of artificial intelligence.

Musk's testimony underscores a broader philosophical divide: whether AI development should prioritise rapid innovation and commercial scaling, or whether stricter constraints are needed to prevent potential long-term risks.

Ultimately, the court will decide on governance and legal questions rather than on hypothetical AI futures. But the proceedings have made one thing clear: debates about artificial intelligence are no longer confined to research labs and tech conferences. They are playing out in public, legal, and political arenas with increasing intensity.