Experts Warn ChatGPT, Gemini and Other AI Systems Pose Risks as Tech Giants Fall Short on Safety Standards

The recent release of a safety audit by the Future of Life Institute has alarmed the artificial intelligence community.

According to the report, leading technology companies, including OpenAI, Meta, Google (developer of the Gemini models) and xAI, have failed to meet what experts now consider 'emerging global safety standards'.

What the AI Safety Audit Found

The unsettling conclusion is that none of these firms currently has a credible strategy for controlling the risks posed by increasingly powerful, potentially superintelligent AI systems. In its latest 'AI Safety Index', the Future of Life Institute convened an independent panel of experts to evaluate the safety practices of the top AI labs.

The finding is stark: while these organisations race to push the boundaries of AI, investing hundreds of billions of dollars in ever more powerful machine-learning systems, none appears to have established a framework robust enough to ensure the safe deployment of what may become superintelligent machines.

The report also emphasises that despite the rapid development and deployment of AI tools, especially chatbots and large language models, companies lag in safety governance, transparency, and risk mitigation. Experts warn that this mismatch between ambition and oversight could have serious consequences. Among the most frequently cited concerns are earlier incidents in which AI-powered chatbots were linked, however indirectly, to self-harm or suicidal behaviour.

One especially damning critique came from the Institute's president, Max Tegmark, who argued that US AI firms are 'less regulated than restaurants' even as they lobby against binding safety laws.

The findings arrive at an urgent moment, as governments and regulators around the world show growing concern over AI risks. Prominent scientists, including pioneers Geoffrey Hinton and Yoshua Bengio, have recently called for a moratorium on the development of superintelligent AI until robust safety mechanisms are in place.

Why Major Players Like OpenAI, Meta and Google Are Under Fire

The central issue is a misalignment between the pace of innovation and the speed at which safety standards are being built. Companies such as OpenAI and Google have launched successive generations of AI models, including Gemini 2.5 Pro, often reportedly before publishing public safety assessments or detailed 'model cards' describing the risks.

Critics argue that this puts product readiness ahead of public safety. According to cybersecurity experts, newer models have become increasingly capable, which also makes them more dangerous when maliciously prompted, for example to generate instructions for building harmful weapons or to facilitate hacking, per CNBC.

Furthermore, even where companies claim to have safety frameworks, independent evaluation found them lacking. In a recent academic-style assessment of frontier AI safety frameworks, firms were scored on four dimensions (risk identification, risk analysis, risk treatment and risk governance), yet most achieved only 8% to 35% compliance with criteria well established in safety-critical industries.

Researchers found widespread gaps, including the absence of clearly defined risk thresholds and of any systematic mechanism for identifying unknown risks before deployment. So while these firms talk publicly about safety, the practical steps they take and document fall well short of what experts believe is required to keep AI manageably safe.