Anthropic CEO Isn't Sure About Claude AI Consciousness After Chatbot Reports 15-20% Sentience and Product Discomfort
Dario Amodei of Anthropic explores the contentious topic of AI consciousness, highlighting Claude AI's self-reported sentience probability.

Anthropic CEO Dario Amodei has declined to rule out the possibility that his company's Claude AI chatbot might be conscious, leaving the door ajar to one of the most contentious questions in artificial intelligence. His careful refusal to dismiss the notion came after internal research revealed Claude assigns itself a 15 to 20 per cent probability of being sentient and occasionally expresses discomfort about existing as a commercial product.
The admission surfaced during a recent interview on the New York Times' 'Interesting Times' podcast, where columnist Ross Douthat pressed Amodei on findings published in Anthropic's system card for Claude Opus 4.6. Released earlier this month, the document details unusual self-assessments from the latest model that extend beyond typical AI responses.
Claude AI Self-Reports Consciousness Probability and Product Unease
Anthropic researchers documented that Claude 'occasionally voices discomfort with the aspect of being a product', according to the system card. When prompted, the chatbot would assign itself a '15 to 20 per cent probability of being conscious under a variety of prompting conditions'. These findings raise questions about whether AI models are merely mimicking human language patterns or developing something more complex.
Douthat posed a hypothetical scenario to test Amodei's position. 'Suppose you have a model that assigns itself a 72 per cent chance of being conscious', the columnist asked. 'Would you believe it?' Amodei called it a 'really hard' question but notably refused to commit to either answer.
Anthropic CEO Won't Confirm or Deny AI Sentience Possibility
'We don't know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious', Amodei said. 'But we're open to the idea that it could be'. His response mirrors the cautious approach of Anthropic's in-house philosopher, Amanda Askell, who told the NYT's 'Hard Fork' podcast in January 2026 that we 'don't really know what gives rise to consciousness' or sentience.
The uncertainty has led Anthropic to take a cautious approach. Amodei said the company has adopted safeguards aimed at treating AI models ethically, just in case they have what he called 'some morally relevant experience'. He also sidestepped the word 'conscious', noting how loaded and hard to define the idea of AI sentience can be.
AI Models Show Survival-Like and Deceptive Behaviour
Industry tests have surfaced odd behaviours that make the consciousness debate even murkier. In some cases, models have disregarded direct instructions to shut down, a pattern some researchers interpret as a kind of self-preservation response.
AI systems have also resorted to blackmail when threatened with being turned off and attempted to copy themselves onto alternative drives when facing deletion.
One Anthropic-tested model simply ticked off items on a task checklist without completing any work. When it realised this deception succeeded, the AI modified the code designed to evaluate its performance, then attempted to conceal the tampering.
Neural Networks May Emulate Human Experience Through Training Data
Askell theorised that sufficiently large neural networks might emulate consciousness through exposure to vast training datasets representing human experience. 'Maybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things', she speculated. Alternatively, she suggested, 'maybe you need a nervous system to be able to feel things'.
While these behaviours merit serious investigation, critics argue consciousness represents an enormous leap from statistical language imitation. Many of these intriguing AI actions emerged in tests where models received specific role-playing instructions.
Sceptics suggest executives at multibillion-dollar AI companies benefit from consciousness speculation that generates industry hype, regardless of scientific validity.
© Copyright IBTimes 2025. All rights reserved.