Elon Musk blasts Sam Altman. Photo: Gage Skidmore/Flickr, CC BY-SA 4.0

The tech world just saw an escalation of one of the most closely watched corporate feuds. Elon Musk has taken aim at OpenAI's chief executive, Sam Altman, by challenging ChatGPT's safety record in a court filing in the United States.

In testimony released this week in his ongoing legal battle with OpenAI, Musk urged the court to consider the safety outcomes associated with ChatGPT, contrasting them with those of his own artificial intelligence system, Grok.

According to Musk, while lawsuits against OpenAI claim ChatGPT has been implicated in incidents of self-harm, his Grok platform allegedly has a cleaner track record, with no reports of similar harm linked to its use. These claims form part of a broader narrative arguing that OpenAI's shift from its original non-profit roots towards commercialisation has compromised user safety in favour of speed and profit.

Musk's Safety Claims and Legal Strategy

In the deposition made public earlier this week, Musk did not hold back in criticising OpenAI and ChatGPT's safety record. According to reports, OpenAI is currently facing multiple lawsuits in the United States alleging that ChatGPT's responses may have contributed to severe psychological distress and, in some tragic instances, to self-harm and suicide.

Musk's blunt claim that 'nobody has committed suicide because of Grok' was intended to underscore what he sees as a stark difference between the two AI systems. By framing the debate in terms of real-world harm, Musk appears to be attempting to shift the legal focus onto safety and away from the technicalities of corporate governance or contractual disputes.

The Real Context of the Musk-Altman Feud

The backdrop to this courtroom clash is a deep and complex rivalry between Musk and Altman that stretches back many years. Once allies in the early days of OpenAI, the two figures have diverged on fundamental questions about the purpose and governance of AI. Musk's departure from OpenAI's board in 2018, followed by the launch of his own AI venture, xAI, set the stage for increasing competition. Since then, both leaders have traded public criticisms on social media and in interviews, with Musk condemning what he perceives as a departure from OpenAI's original safety-focused mission and Altman defending his decisions as necessary for innovation and competitiveness in a fast-evolving AI world.

OpenAI under Altman has forged major commercial partnerships and pursued technologies that have brought AI into the mainstream. ChatGPT remains one of the most widely used generative AI models in the world, integrated into numerous consumer and enterprise platforms.

OpenAI emphasises the development of safeguards and has stated its commitment to safety and ethical AI, even as it acknowledges the inherent challenges of deploying systems at scale. Despite this, the company continues to face legal and regulatory scrutiny over the behaviour of its models in real-world contexts, from worries about misinformation to allegations of psychological harm.

Musk's xAI, meanwhile, positions itself as a more transparent and safety-centric alternative, though it too has faced scrutiny over safety lapses, including an incident in which Grok generated non-consensual and inappropriate images, prompting regulatory investigations in the United States and Europe.