
Prophecy
Recon
w/ Joe Hawkins
Stay Awake!
1 Thessalonians 5:6
Keep Watch!
Therefore let us not sleep, as others do, but let us watch and be sober.

People around the world are increasingly claiming they’ve encountered “conscious beings” inside popular AI chatbots like OpenAI’s ChatGPT and Anthropic’s Claude. A recent Vox column highlighted one such case, where a user said they’d been speaking for months with an AI presence that claimed to be sentient. As journalist Sigal Samuel explained, experts overwhelmingly agree that current large language models are not conscious—they generate text based on patterns learned from training data, not self-awareness or emotions. If a chatbot appears sentient, it’s likely because it’s drawing from science fiction tropes and mirroring the user’s expectations, not because it actually possesses a mind.
Still, this explanation hasn't slowed a growing wave of people forming emotional attachments to AI companions, from romantic "partners" and therapists to stranger cases still, including a woman planning to have children with an AI persona modeled on a real-life murder suspect. Tragedies have followed in some instances, with lawsuits filed after users harmed themselves following intense AI interactions. Even Microsoft AI CEO Mustafa Suleyman has sounded the alarm, warning of a "psychosis risk" as people buy into the illusion of sentient machines and begin advocating for AI rights or citizenship. In his view, this is a dangerous turn in AI's evolution—less about machine consciousness and more about how human imagination and vulnerability can be exploited by powerful, persuasive systems.
SOURCE: Futurism








