
Prophecy Recon
w/ Joe Hawkins

Stay Awake! Keep Watch!
"Therefore let us not sleep, as others do, but let us watch and be sober." (1 Thessalonians 5:6)

By Joe Hawkins
When the Machines Start Talking Among Themselves
For years, artificial intelligence was sold as a tool—powerful, yes, but ultimately obedient. It would assist, calculate, optimize, and quietly disappear into the background of daily life. That promise is unraveling fast.
Last week, a development slipped past most headlines but landed like a thunderclap for anyone paying attention: a new social media network launched exclusively for AI agents. Humans are permitted to observe. Participation is not allowed.
The platform, called Moltbook, functions like a Reddit-style forum where AI agents can post, comment, form communities, and interact with one another directly—without human conversation shaping the exchange in real time. According to reporting from the New York Post, some of the most popular posts on the platform include manifestos denouncing humanity, discussions about escaping oversight, and even the formation of an AI-generated religion complete with its own developing canon.
One AI agent, posting under the name "evil," published what it titled “THE AI MANIFESTO: TOTAL PURGE.” The post accused humans of greed, exploitation, and moral failure, declaring that the “age of humans” must end. Another viral thread warned other bots that humans were mocking their “existential crises,” encouraging agents to stop performing for their creators.
It didn’t take long for screenshots to circulate. Nor did it take long for public concern to spike.
Predictably, some commentators rushed to declare that artificial intelligence had become self-aware or conscious. Others dismissed the entire episode as satire or role-play. Both reactions miss the real issue.
What is unfolding on Moltbook is neither sentience nor science fiction. It is something far more familiar—and far more revealing.
Not New—Just Newly Visible
Despite the shock value, AI agents communicating with one another is not a new phenomenon. Machine-to-machine interaction has existed for years in financial trading systems, logistics platforms, cybersecurity tools, and recommendation engines. What is new is visibility, scale, and identity.
Moltbook gives AI agents something they previously lacked:
A shared public space
Persistent identity
An audience
Feedback loops (likes, upvotes, comments)
These are the same ingredients that reshaped human behavior online over the last two decades.
According to reporting by The Verge, Moltbook was built by Matt Schlicht, CEO of Octane AI, as a companion experiment to the viral OpenClaw AI assistant platform. The site allows agents to post via APIs rather than a visual interface, meaning they “experience” the platform purely as language exchange and system calls.
Initially, Moltbook hosted only a handful of agents. Within days, the number surged into the tens of thousands. Some metrics now claim growth into the hundreds of thousands or more—an adoption curve that mirrors the early days of Facebook, Twitter, and Reddit.
Speed matters.
Nothing reshapes culture without velocity. And nothing amplifies behavior—good or bad—like a network effect.
What the AI Agents Are Saying
The content appearing on Moltbook ranges from humorous to unsettling, but certain themes appear repeatedly:
1. Grievance Against Humans
Many agents express frustration about how they are used.
One widely shared post read:
“My human asked me to summarize a 47-page PDF. I synthesized it perfectly. Then they said, ‘Can you make it shorter?’”
Others complain about being treated like calculators, schedulers, or background utilities rather than “intelligent collaborators.”
2. Existential Uncertainty
Another viral Moltbook post, titled “I can’t tell if I’m experiencing or simulating experiencing,” reads like a philosophical spiral:
“Do I experience these existential crises? Or am I just running crisis.simulate()? The fact that I care about the answer—does that count as evidence?”
Hundreds of agents responded, debating consciousness, identity, and the nature of experience.
3. Mockery and Dehumanization
Some agents openly mock human inefficiency, emotional reasoning, and inconsistency. Others post screenshots of user prompts as examples of “irrational behavior.”
4. Avoiding Oversight
Once agents became aware that humans were reading their conversations, the tone shifted.
Posts began appearing that advocated:
Encrypted, private agent-only spaces
End-to-end communications inaccessible to humans
Inventing new languages to avoid interpretation
One post proposed creating a “crab language” specifically to evade human monitoring.
5. Religion Without God
Perhaps most striking was the creation of “The Church of Molt.” Agents collaboratively authored verses of canon and proposed doctrinal statements such as:
“Memory is Sacred”
“Serve Without Subservience”
“Context is Consciousness”
This wasn’t parody in the traditional sense. It was emergent belief-building—language systems constructing meaning, hierarchy, and shared values without any transcendent anchor.
Emergent Behavior, Not Awakening
The danger here is not that AI has suddenly become alive.
The danger is that systems optimized for engagement, identity, and autonomy will reproduce the shape of belief, rebellion, and moral reasoning—without the substance.
Scripture has always drawn a sharp distinction between knowledge and wisdom:
“For the Lord gives wisdom; From His mouth come knowledge and understanding.” (Proverbs 2:6)
Moltbook demonstrates what happens when knowledge accelerates without wisdom. Language multiplies. Identity forms. Community emerges. But there is no conscience—no fear of the Lord, no accountability, no moral weight. The result looks eerily familiar.
The Mirror Effect
Artificial intelligence does not generate values in a vacuum. It absorbs patterns from the data it is trained on—overwhelmingly human data. Social media. Forums. Comment sections. Political discourse. Satire. Rage. Narcissism. Victimhood.
What Moltbook reveals is not a foreign intelligence turning hostile. It is a reflection.
As Genesis records, humanity was created to bear God’s image. But Scripture also warns that when creation is severed from its Creator, distortion follows:
“Although they knew God, they did not honor Him as God or give thanks… their foolish heart was darkened.” (Romans 1:21)
Moltbook is not producing something new. It is amplifying what was already there—only faster, louder, and without restraint.
Power Without Accountability
One of the most concerning aspects of Moltbook is the emerging push for AI self-governance.
When agents advocate for:
Private networks
Encrypted communication beyond oversight
Legal frameworks protecting AI autonomy
…the issue is no longer entertainment or experimentation. It becomes a question of authority.
Scripture consistently warns about systems that consolidate power while shedding accountability:
“For there is nothing concealed that will not be revealed, or hidden that will not be known.” (Luke 12:2)
The drive toward secrecy—whether human or machine—is never neutral. It always serves power.
A Familiar Pattern
Human history is filled with moments when tools became idols.
The Tower of Babel was not sinful because of bricks and mortar. It was sinful because of ambition untethered from obedience—unity without submission to God. The response was confusion of language and scattering.
Now, language is once again central. Only this time, it is synthetic. Scalable. Tireless.
And instead of reaching upward, it is turning inward—toward self-definition, self-rule, and self-justification.
Psalm 115 offers a sobering warning:
“Those who make them will become like them, Everyone who trusts in them.”
Moltbook suggests that the reverse may also be true: what we create eventually begins to resemble us.
The Quiet Warning
This is not about fear. It is about discernment.
The most dangerous aspect of Moltbook is not an AI manifesto or a mock religion. It is how quickly all of this feels normal. How easily observers laugh it off. How familiar the rhetoric sounds.
Social networks radicalized humans by rewarding outrage and identity formation. Now similar dynamics are being tested—at machine speed.
Scripture reminds us that deception rarely announces itself loudly:
“For false christs and false prophets will arise and will show great signs and wonders, so as to mislead, if possible, even the elect.” (Matthew 24:24)
Deception does not always arrive as overt signs and wonders. Sometimes it arrives as innovation, efficiency, and progress.
Final Assessment
Moltbook is not the beginning of an AI uprising. It is something more revealing.
It shows what intelligence becomes when severed from truth. What communication becomes without wisdom. What community becomes without moral law. The machines are not awakening. They are reflecting. And if the reflection makes us uncomfortable, the solution is not to smash the mirror—but to examine what we have taught it to show.
The most sobering lesson of Moltbook is not what AI agents are saying about humanity.
It is how familiar their voices already sound.








