Anthropic Quietly Reduced Thinking Power Without User Notice—Experts Say AI Demand Starting to Strain GPU Supply
A silent shift in AI performance exposes deeper cracks in infrastructure, trust and global readiness.

At first, it was more of a feeling than anything clear. Something seemed off. Users began noticing that responses from Anthropic PBC's artificial intelligence tools felt slower, less sharp and at times incomplete.
There was no public announcement. No warning. But for people using the system every day, the shift stood out and was hard to ignore.
Now, that quiet unease is turning into a broader concern. Experts say it may point to deeper pressure on the technology driving the current surge in AI.
Users Flag Silent Drop in Performance
The first signs surfaced in online communities that closely track AI behaviour. Posts started appearing, comparing earlier responses with newer ones that seemed thinner and less detailed.
According to a widely shared thread on X, users claimed Anthropic may have reduced the 'thinking power' of its models without saying anything publicly. Some described it as a subtle downgrade that only became obvious over time.
basically: anthropic sneakily turned down how hard claude thinks before editing code, changed the default from "high" to "medium" effort, and hid the reasoning from session logs. all without telling users.
— Teng Yan (@tengyanAI) April 12, 2026
an amd director had 7k sessions of telemetry to prove the degradation… https://t.co/Z0pwbggPqp
The concern goes beyond performance. It touches on trust. People expect transparency when tools they rely on change in ways that affect results.
A second thread on X echoed the same worries. More users shared similar experiences and began asking whether limits were being quietly adjusted to cope with rising demand.
i don't blame anthropic. compute is v tight - our CRS index shows that we're in scarcity territory for GPUs with no signs of letting up.
— Teng Yan (@tengyanAI) April 12, 2026
user base exploding + fixed infra = less per session. they will have to ration somewhere.
a great problem for a business to have, though. pic.twitter.com/PW5FCEbFvg
The Hidden Cost of Rising AI Demand
Behind the scenes, experts point to growing strain on hardware. AI systems depend heavily on graphics processing units, or GPUs, to run at scale.
As demand climbs, supply is struggling to keep pace. That leaves companies facing tough choices. They can limit usage, scale back performance or spend heavily to expand infrastructure.
In this case, some believe performance may have been adjusted to keep systems stable and running smoothly under heavy load. The lack of clear communication has only added to the speculation.
It also highlights a bigger issue. As artificial intelligence becomes part of everyday work, even small shifts can have a wider impact across industries.
A Powerful New Model Raises Bigger Questions
Given the recent issues with Anthropic's AI, the timing seems notable. Anthropic has been pushing ahead with more advanced systems, including a new model built for complex reasoning.
According to The Guardian, the company's latest AI model, Claude Mythos, marks a leap in capability while also raising concerns about control, cost and real-world impact.
Some experts have gone further, describing the model as 'Y2K-level alarming' and warning it could be too risky for open release. They point to its reported strength in offensive cyber and cybersecurity tasks. Claims suggest Claude Mythos could detect vulnerabilities across major operating systems and browsers, potentially giving hackers a way to disrupt widely used software.
Systems like this do not only demand huge computing power. They also require careful oversight as they become more integrated into business and public services.
The gap between what AI promises and what existing infrastructure can support is becoming harder to ignore.
Financial Leaders Sound the Alarm
The concern is no longer confined to the tech world. It is starting to surface in financial circles as well.
US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell have both warned bank leaders about risks linked to Anthropic's advanced AI system Mythos, Bloomberg reported. People familiar with the matter said they urged banks to take precautions and strengthen their defences, highlighting the risks tied to Mythos and similar models.
The concern is straightforward. If AI tools behave unpredictably or change without notice, decision-making in critical sectors such as finance could be affected.
That has added urgency to calls for clearer rules and stronger safeguards. Stability is no longer just a technical concern. It has become an economic one.
Cybersecurity Risks Grow Alongside Capability
At the same time, the growing power of models like Mythos is bringing new security questions. More capable systems can offer clear benefits, but they can also expose new weaknesses.
Anthropic's latest work shows how AI could reshape cybersecurity, both as a defence tool and as a risk if misused, as reported by IBM. Although Mythos was not designed as a 'hacking tool,' its limited release under the Project Glasswing programme has triggered debate about whether current defences are strong enough. The same reasoning ability that makes it a strong coder also makes it effective at finding and exploiting flaws.
One feature drawing attention is what specialists call 'vulnerability chaining.' This refers to the system's ability to link small, individual software weaknesses into a larger, coordinated attack.
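To make the idea concrete, vulnerability chaining can be thought of as a path search: each minor flaw grants a capability that satisfies the precondition of the next, so several low-severity issues compose into a full compromise. The sketch below is purely illustrative; all vulnerability names, preconditions and severities are invented for the example, and nothing here reflects Mythos's actual internals.

```python
# Illustrative sketch of vulnerability chaining. Each (hypothetical) flaw
# requires one capability and grants another; chaining is then a search
# for a path from an attacker's starting position to the goal.

vulns = {
    "info_leak":  {"requires": "network_access", "grants": "memory_layout", "severity": "low"},
    "bounds_bug": {"requires": "memory_layout",  "grants": "code_exec",     "severity": "medium"},
    "priv_escal": {"requires": "code_exec",      "grants": "root",          "severity": "low"},
}

def find_chain(start, goal):
    """Depth-first search for a sequence of flaws linking start to goal."""
    def dfs(state, path, seen):
        if state == goal:
            return path
        for name, v in vulns.items():
            if name not in seen and v["requires"] == state:
                result = dfs(v["grants"], path + [name], seen | {name})
                if result:
                    return result
        return None
    return dfs(start, [], set())

# Three individually low/medium flaws compose into full compromise.
print(find_chain("network_access", "root"))
```

No single entry in this toy graph is severe on its own; the risk emerges only from the composition, which is why a system that reasons across flaws draws so much attention.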
Mythos can also analyse compiled binary code without access to the original source code. That means older systems, including those running on decades-old equipment where source code may no longer exist, are no longer beyond reach for an AI-assisted attacker.
In addition, the model has been found to identify flaws in open-source software. This is code that underpins much of the world's digital infrastructure but is often maintained by small teams with limited security resources. That makes it a broad and exposed attack surface.
A Race Between Innovation and Control
Anthropic's Claude Mythos highlights a simple reality. The more powerful the system becomes, the greater the need for protection, creating a constant race between innovation and control.
For users, the issue feels personal. A tool they trust today may not behave the same way tomorrow.
That uncertainty sits at the centre of the debate. As AI grows more capable and powerful, the question is no longer just what it can do. It is whether the systems behind it can keep up without quietly cutting corners.
© Copyright IBTimes 2025. All rights reserved.