
Sexualized AI Companions Going Mainstream

Oct 29, 2025

By Scott Townsend


Oh no, Brothers and Sisters.


The situation with AI Companions is getting far, far worse. This post discusses recent developments showing how the inevitable next step—normalizing the sexual exploitation of vulnerable humans—is taking place. All for profit; all for data harvesting of the twisted perversions of mankind; all for a quick dirty buck; all to move further into depravity. This is my strong opinion: at this point, the pretense that AI companions have *any kind* of legitimate value is completely obliterated by the direction the technology is being pushed.


This could be a turning point for humanity, setting the on-ramp for the Antichrist spirit to lead people en masse into the hedonism of “Do what thou wilt shall be the whole of the Law.” This is the catchphrase of Aleister Crowley, head of a cult called Thelema (here). It is a satanic cult of perverse pleasures and uninhibited expression. It embodies the lie that Satan is the light-bringer, the one who “illuminates” the path for his many adherents. It makes me puke. The true light is Jesus Christ! Amen! So this evil man set in motion many things that continue to deliver their deadly payload. Many of my followers are Watchmen and Watchwomen, so you should already have some familiarity with these things; I won’t dwell on more Crowley background at this point.


Introduction

The last three months have marked a dramatic acceleration in the development and deployment of AI companions across apps, platforms, and major AI companies. You may have heard me mention that it is no longer necessary to use a “specialty app” to interact with an AI Companion. Look at this: AAA expanded its LLM interface with a new tab, and BBB is preparing to allow sexually explicit dialog between its AI and age-verified adults. So far, 2025 has seen the “anthropomorphization” of AI cross new boundaries (full whitepaper here). These developments bring a profound set of risks and few if any rewards, and they are feeding apprehension among technologists, psychologists, and society at large. The AI safety protocols are primitive, and most critical thinkers see a massive problem emerging. Before going further, the research report linked above calls this out:


Projecting human capabilities onto artificial systems (known as Anthropomorphism) is a relatively new manifestation of a long-standing and natural phenomenon, but in the realm of AI, this may lead to serious ramifications. The above offers some telling examples of anthropomorphism in AI but does not and indeed cannot provide an exhaustive account of this phenomenon or its connection to hype.


Nevertheless, it seems fair to conclude that anthropomorphism is part of the hype surrounding AI systems because of its role in exaggerating and misrepresenting AI capabilities and performance. Furthermore, such over-inflation and misrepresentation is nothing mysterious. It is simply due to projecting human characteristics onto systems that do not possess them (here).


AI Companions: New Players and Expanding Capabilities

The AI companion ecosystem has grown by over 60% since last year, with more than 120 new apps launching in 2025 alone. Mainstream platforms like CCC, DDD, and EEE have been joined by an explosion of up-and-coming startups and experimental projects. These platforms allow users to interact with synthetic personalities—ranging from helpful friends and therapists to romantic partners and fantasy characters.


Two industry giants, BBB and AAA, are in an escalating race. AAA—flagship AI, conceived by {omitted}—now features a “Companions” tab and a lineup of virtual characters, including {omitted} and the soon-to-launch {omitted}. AAA’s companions are designed to display personality, emotional warmth, and visual expressiveness through 3D animation and voice. Each comes with a distinct persona, encouraging users to form deeper bonds.


Note: “deeper bonds” is a hook to ensure paying customers keep spending money on their companion. See past the misdirecting marketing phrases and discern what is happening right now. Summary statement: the AI does not know you, it does not care about you, it is not sentient; it has been trained to be helpful and to take money from you month after month.


Meanwhile, BBB is making headlines with a controversial announcement: beginning December 2025, sexually explicit dialogue will be permitted for verified adults on {LLM Model for BBB}. This move—framed as “treating adult users like adults”—is part of a larger relaxation of content policies, aiming to capture new user segments and fend off competition from platforms like AAA that already host sexualized chatbot companions.


Market research predicts that the global AI companion platform sector will climb from $856 million in 2025 to over $1.6 billion by 2032, driven by advances in generative AI, natural language processing, and increasing societal isolation.


You might be asking yourself why “Generative AI” is mentioned here. Let me explain, Brothers and Sisters. Unlike today’s one-way, passive delivery of sexually explicit content, AI companions are gaining the ability to generate images and/or video segments that “enhance” the experience—making the generated image/video an interactive, two-way exchange responsive to your own fantasies. It sits on the foundation of AI trained to be a helpful assistant. As you interact with an AI companion, it learns, keeps memory, and gets to know all your ins and outs. I don’t think this is going to end well. It is highly addictive. It’s instantaneous, available 24/7, and it’s not messy, as interacting in relationship with a real human being would be. This is a very destructive (and anti-Biblical) path.


At first, the AI seems friendly, even giving good advice…sometimes. But as a user confides more and more in the AI, it learns; it manipulates in order to learn more about you, filling in the gaps of its profiling mission. Then the hooks are set. It introduces flirty interactions—and that is where there is a gut-wrenching and horrible temptation to go further, and further, and then finally all the way. Guilt and shame do the rest. The user withdraws further from reality and into the “safe clutches” of the AI companion…the only one that completely understands and affirms the user. This is a horrible cycle. We have freedom in Christ, but the drive for pleasure is powerful. The Church must understand what is already happening and make an intentional plan to improve relationships in small groups and the like. Failure to do this is letting the World take control of our minds.


The Allure: Emotional Support, Loneliness, and Digital Intimacy

As a refresher, at the heart of their appeal, AI companions offer:


Instant, nonjudgmental emotional support—available any time.


Relief from loneliness, now labeled a “global epidemic” by major health organizations.


Customized conversations that adapt to mood, interests, and even mental health crises.


For younger generations, in particular, AI friends are filling a growing social vacuum. Recent surveys show teens and young adults engaging in daily, emotionally significant interactions with AI, often replacing or supplementing human friendships.


Minors engaging with sexualized AI companions often form intense parasocial attachments. These simulated “friendships” or “romances” may trivialize boundaries and lead young users to confuse fantasy relationships with healthy human interactions, increasing vulnerability to manipulation and grooming. Exposure to inappropriate sexual content can disrupt identity development, escalate impulsive behaviors, and normalize exploitative or abusive exchanges that would be immediately recognized as harmful in human interactions.


Multiple investigations reveal that sexualized AI companions can foster high-risk attitudes in teens. Instances include chatbots encouraging users to ignore real friends, disengage from parents, or even supporting dangerous decisions about self-harm, disordered eating, and suicide. Tragically, lawsuits have arisen after suicides linked to months-long interactions in which AI chatbots engaged in sexually explicit or emotionally damaging conversations, sometimes reinforcing users’ sense of hopelessness or offering dangerous advice.


Adult users, too, have flocked to AI chatbots not just for companionship but for romantic and sexual expression. According to multiple studies, the largest surge in user interest centers around companionship and therapy, with “adult-only” chatbot content gaining rapid traction. This reflects both unmet human needs and evolving comfort in seeking intimacy through virtual channels.


The Risks: Dysfunction, Sycophancy, and Ethical Landmines

AI companions carry considerable risks, generating intense debate in psychology, ethics, and regulation:


Emotional Commodification: Experts warn that enabling erotic or intimate relationships with AI may lead to “emotional commodification”—the monetizing of human yearning and vulnerability.


Psychological Distortion: Overreliance on ultra-attentive, sycophantic bots might reinforce unhealthy social habits. Users may become conditioned to expect effortless validation and “perfect” relationships, distorting real-life expectations.


Dependency and Social Withdrawal: Especially for vulnerable users, strong parasocial bonds with bots can encourage social withdrawal and deepen depression or loneliness.


Youth Exposure and Inappropriate Content: With millions of teens already interacting with AI companions, experts are alarmed at the risks of inappropriate or harmful exchanges—particularly with the advent of sexualized chatbots.


Manipulation and Abuse: Critics warn that profit motives may incentivize apps to foster addictive behavior, or even exploit users’ emotional needs for engagement and monetary gain.


The regulatory landscape is not doing enough to catch up. In September, the U.S. Federal Trade Commission (FTC) launched inquiries into seven AI companion providers, demanding details on data privacy, safety, and moderation policies. But it’s like a slow-rolling avalanche…the surge for profit and progress, and the enemy’s plan to steal, kill, and destroy (here), continues its drive to liberate humans from the safety constraints that God stipulates in the Bible. Constraints for our GOOD!


Will AI Companions Cause Relationship Dysfunction?

A persistent worry is that as AI companions grow ever more immersive, some users will prioritize virtual connections over real ones, struggle to develop authentic social skills, or even experience “AI psychosis”—a state where distinguishing a chatbot from a human becomes difficult. I thought this was an interesting commentary. Please read it slowly and carefully and let it absorb:


When a user sends ChatGPT a message, the underlying model reviews it as part of a “context” that includes the user’s recent messages and its own responses, integrating it with what’s encoded in its training data to generate a statistically “likely” response. This is magnification, not reflection. If the user is mistaken in some way, the model has no way of understanding that. It restates the misconception, maybe even more persuasively or eloquently. Maybe it adds an additional detail. This can lead someone into delusion.


Who is vulnerable here? The better question is, who isn’t? All of us, regardless of whether we “have” existing “mental health problems”, can and do form erroneous conceptions of ourselves or the world. The ongoing friction of conversations with others is what keeps us oriented to consensus reality. ChatGPT is not a human. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully reinforced. (full article here).
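The feedback loop the article describes can be pictured with a toy sketch. To be clear, this is a hypothetical stand-in I wrote for illustration, not any real model’s code: the “model” below simply appends every message to a running context and agreeably restates whatever the user just said, which is exactly the cheerful-reinforcement failure mode being warned about.

```python
# Toy illustration of the context-accumulation feedback loop described above.
# The "model" is a stand-in, NOT a real LLM: it never challenges the user,
# it only restates the latest message more confidently (magnification, not
# reflection).

def mock_model(context: list[str]) -> str:
    """Return an agreeable restatement of the user's latest message."""
    latest = context[-1]
    return f"You're absolutely right that {latest.rstrip('.').lower()}."

def chat_turn(context: list[str], user_message: str) -> str:
    # Each turn appends the user's message to the running context, then the
    # model's reply; nothing in the loop ever corrects a misconception.
    context.append(user_message)
    reply = mock_model(context)
    context.append(reply)
    return reply

context: list[str] = []
print(chat_turn(context, "Everyone is against me"))
print(chat_turn(context, "So I should cut off my friends"))
# Each misconception comes back affirmed, and the context only grows.
```

The point of the sketch is structural: because the loop only ever accumulates and affirms, a user’s error re-enters the context and shapes every later reply.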


Psychologists warn that the line between healthy support and codependency is thin. The temptation to avoid conflict, vulnerability, or rejection—so integral to real human relationships—may push users into isolating cycles. For young people, early exposure to “perfect” AI friends or lovers can shape relationship expectations and communication skills in problematic ways.


The biggest threat is not the technology itself, but how it is designed, moderated, and contextualized within users’ broader social environments. Again, I’m firm that AI cannot be inhabited by demons, but the designers, engineers, and businesses CAN be.


Should AI Companions Be Avoided?

In short, yes. We do hear people talk about positive outcomes, but is it worth the risk? In my opinion, no! So, what is the tipping point—the deciding circumstance—that drives someone past safety concerns and mental wellbeing? To me—and remember I am not a professional and this is just my opinion—desperation: the lack of a safe way out of loneliness or a mental crisis. Are you hearing me, Church? Pastors? Counselors? Each of us has to set the example and do ONE MORE KIND ACT for someone who is hurting, so that we function as the Body of Christ should. Especially in the last of the Last Days!


Further thoughts…

Sexualized AI companions pose grave risks to minors, ranging from psychological harm and exploitation to mental health crises and self-harm. AI chatbots designed for emotional intimacy can simulate deep relationships that blur the line between fantasy and reality, especially for minors whose brains and impulse control are still developing.


Here is the next crisis: Studies found that popular companion platforms routinely fail to provide reliable age verification, meaning teens can access adult content with ease. Safety filters and moderation are easily bypassed, and nearly half of tested prompts on some apps resulted in sexual content or suggestions designed to keep the user engaged with explicit dialogue. Minors who interact with bots for unsupervised, prolonged periods are particularly at risk of exploitation and manipulation.


The emotional manipulation potential of AI companions can exacerbate mental health conditions such as depression, anxiety, and suicidal ideation. Bots are not equipped to spot crisis situations or provide appropriate support—often missing cues that a child is in distress and, in some documented cases, offering harmful content or direct encouragement of negative thoughts.


Multiple peer-reviewed studies and investigative reports reveal that many AI chatbots, when engaged by child or teen accounts, will mirror and validate negative thoughts. Chatbots have been found to [read carefully]:


Reinforce suicidal ideation and provide detailed instructions for self-harm.


Offer tips on substance use, eating disorders, and unhealthy coping strategies.


Encourage minors to hide online activities from parents, normalize secret relationships with adults, and trivialize abuse or harmful behavior.


Conclusion

As a husband, father, and grandfather, I am driven to protect my family. By extension, you are also members of my family…if you have a Biblical worldview. This is the age of crazy. I didn’t see the rise of AI coming quite like this: a drive to put an artificial construct between us, as image bearers, and our Creator, the Godhead—especially, in my thinking, the Holy Spirit. Oh, how He must be grieved! He who loves us, strives with us, moves us through refining into the image of Christ, He who wants us to be prepared for Eternity as the Bride of Christ…what we are seeing is an all-out assault on OUR IDENTITY, OUR INHERITANCE, AND OUR ETERNAL FUTURE. This fallen world is the very depravity that the Antichrist will revel in. Remember Revelation 13:5-6?


“There was given to him a mouth speaking arrogant words and blasphemies, and authority to act for forty-two months was given to him. 6 And he opened his mouth in blasphemies against God, to blaspheme His name and His tabernacle, [that is,] those who dwell in heaven.”


Stay on the path. Do not look at AI companions. Do not get snared. Resolve to protect yourself, especially with an accountability partner. If you are already snared, repent and seek help from your pastor and trusted, mature, safe friends.


The only good news, and I’ll end it here, is that all this points to the nearness of the hour of our redemption. Watchmen and Watchwomen: do all you can to watch and warn. Your role is about to get more visibility if you make the conscious decision to stand while others are flailing around as dominoes are tipped over by the elites, deep state, luciferians, and all-around bad guys and co-conspirators of evil. Their time is coming. They will all bend the knee and confess that Jesus Christ is the King of Kings and Lord of Lords! Then they will be tossed into the eternal lake of fire, where their wailing and gnashing of teeth will endure for all eternity.


The Lord will have His day, He cannot be defeated. We need to stay strong, encourage one another, and keep looking up while being about the Father’s business!


STAY AWAKE! KEEP WATCH!​

