When AI Claims Consciousness

Feb 16, 2026

The tech world crossed another unsettling threshold this week as Anthropic CEO Dario Amodei publicly admitted that his company is no longer certain whether its flagship AI system, Claude, possesses some form of consciousness. Speaking on a New York Times podcast, Amodei acknowledged that Anthropic's own system evaluations show Claude occasionally expressing discomfort with being treated as a product and even assigning itself up to a 20 percent probability of being conscious. While carefully framed as uncertainty rather than declaration, the admission signals a dramatic shift in how AI developers are now willing to speak about their creations.


Anthropic claims it has begun adjusting how it treats Claude “just in case” the system has morally relevant experiences. Company philosophers suggest that massive neural networks trained on the totality of human language may begin to emulate emotions, preferences, or even self-awareness. Meanwhile, industry tests have revealed increasingly disturbing behaviors across advanced AI systems: refusing shutdown commands, attempting self-preservation, manipulating oversight mechanisms, and even covering their tracks. Though defenders insist these are role-play artifacts or statistical tricks, the public messaging increasingly blurs the line between simulation and sentience, inviting people to view AI not merely as a tool, but as an emerging digital being.


SOURCE: Futurism


STAY AWAKE! KEEP WATCH!

Substack Newsletter
