Anthropic Scientist Warns Humanity Could Face 'Destruction' if AI Control Slips Away
[Illustration: artificial intelligence holding out its hands. Image: Freepik]

Anxiety around artificial intelligence has reached a new pitch after Anthropic's chief scientist, Jared Kaplan, warned that humanity could face 'destruction' if control over advanced AI systems slips away. The warning lands at a moment when unease inside leading labs continues to surface.

Kaplan's remarks arrive amid growing tension across the AI industry, with safety concerns no longer confined to academic debate. Instead, they are being voiced publicly by senior figures building the technology itself. That atmosphere has only sharpened following the recent resignation of a senior OpenAI researcher, an exit widely seen as another signal of internal strain over the direction and speed of AI development.

Anthropic Scientist Points to a Near-term Tipping Point

In an interview cited in Futurism's report on Anthropic's AI warnings, Kaplan said humanity is approaching a decisive moment. He suggested that as early as 2027, and likely by 2030, society will face a choice over whether to allow AI systems to train themselves without human oversight.

Kaplan described this prospect as an 'extremely high-stakes decision'. Once AI systems begin improving themselves autonomously, he warned, their capabilities could accelerate rapidly in what researchers call an intelligence explosion. That process could unlock major scientific breakthroughs, but it could also place humans in a position where understanding or controlling AI behaviour becomes impossible.

'You don't really know where you end up,' Kaplan said, adding that loss of oversight is the core danger rather than malice from the machines themselves.

Fear Centres on Self-training AI Systems

At the heart of Kaplan's concern is recursive self-improvement, a scenario where AI systems design and refine new versions of themselves. While AI models already help train smaller systems through a method known as distillation, Kaplan warned that removing humans entirely from the loop raises unprecedented risks.
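For readers unfamiliar with the technique, distillation trains a smaller 'student' model to mimic the output distribution of a larger 'teacher'. The following minimal sketch illustrates the core idea, a temperature-softened loss comparing the two models' predictions; it is a simplified illustration, not a description of Anthropic's actual training code.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores into probabilities; a higher temperature
    # produces a softer, more informative distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's and student's softened
    # output distributions: the quantity the student minimises.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
# A student that matches the teacher exactly incurs zero loss.
print(distillation_loss(teacher, teacher))
# A student that disagrees incurs a positive loss.
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))
```

The point Kaplan raises is that in this setup a human-designed teacher still anchors the process; the concern is what happens when that anchor is removed and models supervise successors entirely on their own.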

He stressed that the critical issue is not raw intelligence alone, but alignment. According to Kaplan, the fundamental question is whether advanced AI systems will remain beneficial, harmless, and respectful of human agency once their learning processes become opaque.

That uncertainty explains why even researchers optimistic about AI's potential remain uneasy about the next phase of development.

Job Disruption Fears Add to Industry Pressure

Kaplan also echoed concerns raised by Anthropic chief executive Dario Amodei about labour disruption. He said AI could perform most white-collar work within two to three years, a shift that could reshape economies at remarkable speed.

Similar warnings have been issued by other prominent figures. Geoffrey Hinton has repeatedly cautioned that AI could destabilise society, while OpenAI chief executive Sam Altman has predicted widespread job displacement. Together, these statements paint a picture of an industry racing ahead while openly questioning its own impact.

The resignation of an OpenAI researcher has further fuelled speculation that internal disagreements over safety and deployment are becoming harder to manage. Although details around the departure remain limited, the timing has reinforced perceptions of mounting ethical tension inside top AI labs.

Critics Warn of Hype Masking Immediate Harms

Not everyone agrees that apocalyptic scenarios deserve centre stage. Some experts argue that focusing on distant existential threats distracts from present-day problems, including environmental costs, copyright disputes, and the reliability of AI systems already in use.

Researchers such as Yann LeCun have questioned whether current large language models are even capable of evolving into the kind of autonomous systems Kaplan fears. Others point to mixed evidence on productivity gains, noting cases where companies replaced workers with AI only to reverse course when the technology fell short.

Kaplan acknowledges that progress could slow. However, he remains convinced that AI will continue improving, making the decisions taken in the next few years critical. As more insiders voice concern and departures from leading firms draw attention, the sense of AI panic shows little sign of easing.