Can AI Be Possessed By Demons? (1 of 4)

Apr 2, 2025

by Scott Townsend


This article can be viewed in its entirety HERE.


Notes from Scott

Having pushed the ball forward on AI intrusion, the next thing I want to discuss is the repeated assertion from the Watchman community that AI can be inhabited by demons. As you may remember from my various interviews, I don’t agree with this. But why? This 4-part series will attempt to answer that.


Let me begin with my credentials so that you understand my technical background. Admittedly, this doesn’t mean I am right, but it should lend some weight to my opinion. I am a serial (sometimes parallel) software entrepreneur, often in small-team startups. I’ve held titles such as President, Technical Co-Founder, Chief Technology Officer, VP Technology, and Principal Data Architect. I am typically hired because I am very good at creating new intellectual property. What this means is that my work creates a NEW value proposition for customers and users.


Now, let me give you my background in using AI. I began working with OpenAI and Anthropic’s Claude around June 2023, when I started on a blockchain project that I can’t disclose right now. In April 2024 the Lord put it on my heart to create the first-ever news aggregation app using 80,000+ news sources. News is news, but here’s what’s on my heart: from my own experience, it’s getting harder to visit multiple websites looking for news correlated to a Biblical Worldview and to the search terms important to the Watchman community. There is much more to say about this app; just know that I am actively working on it six days a week.


In the context of building this app, I shifted my focus from my past career in Microsoft SQL Server data warehouse development, including multi-dimensional cubes (SSAS). I am now doing what is called ‘full stack’ development using website technologies and NoSQL document databases like DocumentDB. Currently I’m working on the ‘back end’ database, and soon I will be working on the ‘front end’, which retrieves the news stories from the back end. I don’t want to get weird here, but during this pivot in my skillset, I wanted to take advantage of what are called “AI Coding Assistants” that help developers write code. I have been working with Cursor and the just-released Claude 3.7 Sonnet every day.


As I’ve been working with AI, I have gained a huge amount of experience. Note: I am not a 20-year-old anymore … but for a 67-year-old, I’m doing pretty good ;) So, with this technical foundation in view, let’s move on to the next part, because we need to break down how AI works. I will do my very best to keep it simple (yes—I hear Tom Hughes telling me to ‘bring it down a notch’). With all this said, we’re going to go on a journey to understand what AI is, how it works, why it works, and more. I hope you find this helpful.


Introduction to AI

What is AI? Artificial intelligence (AI) refers to machines and software that exhibit intelligence similar to human thinking. In simple terms, AI enables computers to simulate human cognitive abilities such as learning, problem solving, understanding language, and making decisions. For a deeper dive from IBM go here. Unlike regular programs that follow fixed instructions, AI systems can improve their performance by learning from data and experience. Increasingly, AI is self-improving.


By convention, you will always hear the term “artificial intelligence” as though it is intelligent, but I agree with Patrick Wood that it is NOT intelligent at all. In our opinion, it is indeed artificial, but more like a mimic! Here is why: if the architecture of a neural network is predictive, i.e., predicting the next logical word AFTER the current word, then you can see that it uses probability to “talk”, not the fluid text and speech we produce. Because of this, AI can’t make intellectual “leaps” like humans do. Or, for that matter, “create” like we do.


How AI works in general: Most modern AI is built on a foundation of machine learning. Machine learning means the AI system isn’t explicitly programmed for every scenario; instead, it learns patterns from data. For example, to teach an AI to recognize cats in images, we don’t give it a step-by-step cat-finding program. Instead, we show it many images of cats and non-cats. The AI model learns the visual patterns that define a “cat” and gets better at identifying cats over time. In short, AI learns like a student: through practice and feedback, rather than just following hard-coded rules.
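The practice-and-feedback idea above can be sketched in a few lines of code. This is a toy illustration only—the “features” and numbers are invented for this example, and real systems use far richer data—but it shows learning from examples rather than following hard-coded rules:

```python
# A toy "learn from examples" sketch. Each example pairs made-up features
# (say, "ear pointiness" and "whisker length" on a 0-1 scale) with a label:
# 1 = cat, 0 = not a cat. The numbers are purely illustrative.
examples = [
    ((0.9, 0.8), 1),  # cat
    ((0.8, 0.9), 1),  # cat
    ((0.2, 0.1), 0),  # not a cat
    ((0.1, 0.3), 0),  # not a cat
]

weights = [0.0, 0.0]  # the model starts out knowing nothing
bias = 0.0
rate = 0.5            # how big each corrective nudge is

# Practice-and-feedback loop: guess, compare to the right answer, adjust.
for _ in range(20):
    for (x1, x2), label in examples:
        guess = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = label - guess          # feedback: how wrong was the guess?
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# After "practice", the adjusted weights classify every example correctly.
predictions = [1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
               for (x1, x2), _ in examples]
print(predictions)  # → [1, 1, 0, 0]
```

Notice that nobody wrote a rule like “pointy ears means cat”; the rule emerged from repeated feedback on examples, which is the heart of machine learning.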


What is a Neural Network?

One of the key techniques that enabled the current AI revolution is the neural network. A neural network is a type of machine learning model inspired by the human brain’s network of neurons. Think of it as a web of interconnected nodes (neurons) organized in layers. Each node is a simple computing unit that receives input, does a computation, and passes an output to the next layer. A neural network typically has an input layer, one or more hidden layers, and an output layer, all connected by numerous weighted links.


In a neural network, each connection has an associated number, called a weight, which indicates the importance of that connection. The design is loosely bio-inspired: just as neurons in our brain fire signals, the nodes in an artificial neural network activate based on inputs. When an input (say, a picture) is fed into the network, it passes through the layers. At each node, the input values are multiplied by the weights, summed up, and then an activation function decides whether that node “fires”. Major rabbit hole here. The output then becomes input to the next layer, and so on, until a final result is produced.
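The multiply-sum-activate step described above is simple enough to write out. Here is a minimal sketch of a single artificial “neuron” (the input and weight values are made up for illustration):

```python
import math

# One artificial "neuron": inputs are multiplied by their weights, summed
# together with a bias, and an activation function decides how strongly
# the node "fires". The numbers here are invented for illustration.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation: squashes the sum into a 0-1 "firing strength".
    return 1 / (1 + math.exp(-total))

inputs = [0.5, 0.9, 0.2]     # e.g. values coming from the previous layer
weights = [0.8, -0.4, 0.3]   # how important each connection is
print(neuron(inputs, weights, bias=0.1))  # a value between 0 and 1
```

In a full network, that output value would simply become one of the inputs to the neurons in the next layer, repeated layer by layer until a final result comes out.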


Why neural networks are useful: Neural networks excel at finding complex patterns in data. Because they have many layers of neurons, they are often called deep neural networks (deep learning) when the number of layers is large. They can learn to recognize images, understand speech, or predict outcomes by training on examples. For instance, a neural network can learn to recognize handwritten digits by training on thousands of labeled examples of digits. Over time, the network adjusts its internal weights (more on that in the next sections) to improve its accuracy. Neural networks are now used in computer vision, speech recognition, and many other AI applications because they are very good at capturing intricate patterns.


In essence, just as our brains’ neuron connections strengthen as we learn, a neural network “learns” by adjusting its connection strengths (weights) to improve at a task.


Example of how an AI model completes a sentence

Let’s see how the AI “thinks” about Psalm 23:4: “Even though I walk through the valley of the shadow of death, I fear no evil, for You are with me; Your rod and Your staff, they comfort me.” That verse is what’s in our mind, but we type in only this prompt: "Though I walk through the..." and hit the enter key. What do you think will happen? How does the model know how to respond? Let’s walk through an example.


Remember, the prompt we give the AI is: “Though I walk through the...”


Next Word Prediction:


The model evaluates possible next words and might predict "valley" as the most likely option.


Sentence becomes: "Though I walk through the valley"


Continuing the Process (Next Word Prediction):


With the new context, the model now predicts the next word. It might choose "of."


Sentence becomes: "Though I walk through the valley of"


Further Completion (Next Word Prediction):


The model, considering the context of literature and familiar phrases, might then choose "shadow."


Sentence becomes: "Though I walk through the valley of shadow"


Finalizing a Coherent Sentence (Next Word Prediction):


Continuing the pattern, it might add words to complete a familiar phrase:

"and" → "death," → "I" → "fear" → "no" → "evil."


Final Sentence:

"Though I walk through the valley of shadow and death, I fear no evil."


At each step, the model uses learned weights to calculate the probability of each candidate word, selecting the one that best fits the context to gradually build a coherent sentence.
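The word-by-word loop above can be sketched in code. To keep it readable, this toy version hard-codes a tiny table of made-up probabilities; a real LLM learns probabilities over a vocabulary of tens of thousands of tokens from its training data:

```python
# A toy next-word predictor. Each entry maps the current word to candidate
# next words with made-up probabilities (a real model learns these).
next_word_probs = {
    "the":    {"valley": 0.6, "path": 0.3, "garden": 0.1},
    "valley": {"of": 0.8, "below": 0.2},
    "of":     {"shadow": 0.5, "light": 0.3, "death": 0.2},
    "shadow": {"and": 0.7, "alone": 0.3},
    "and":    {"death,": 0.9, "fear": 0.1},
    "death,": {"I": 1.0},
    "I":      {"fear": 1.0},
    "fear":   {"no": 1.0},
    "no":     {"evil.": 1.0},
}

sentence = "Though I walk through the".split()  # the user's prompt
while sentence[-1] in next_word_probs:
    candidates = next_word_probs[sentence[-1]]
    # Greedy choice: append the highest-probability candidate word.
    best = max(candidates, key=candidates.get)
    sentence.append(best)

print(" ".join(sentence))
# → "Though I walk through the valley of shadow and death, I fear no evil."
```

This sketch only looks at the single previous word; real transformer models weigh the entire context when scoring candidates, which is why their completions hold together over long passages.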


Understanding Transformers in AI

Now, we’re going to build on this idea of predicting the next word as AI responds to a prompt. While neural networks have been around for decades, a new architecture called the Transformer has revolutionized AI in recent years. The Transformer is a deep learning model introduced by Google researchers in 2017 that is especially powerful for understanding language and sequences. Deep dive here. Unlike earlier neural network architectures that processed words one-by-one in order, transformers use an attention mechanism to consider all parts of the input at once. This means a Transformer can “look” at an entire sentence (or paragraph) and learn which words or elements are most important to each other in context.


The key innovation: attention. The Transformer is built on a concept called “self-attention.” Attention allows the model to weigh the importance of different words (or parts of the input) when producing an output. For example, if asked to translate a sentence or answer a question, a Transformer can focus on the relevant words in the input rather than treating all words equally.


This attention mechanism is usually run several times in parallel (so-called “multi-head attention”), letting the model capture relationships like pronoun references or word meanings across a sentence. In simpler terms, if the Transformer were reading a book, attention is like highlighting the key sentences that are important for understanding the story.
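The core of attention—score every word for relevance, then blend meanings according to those scores—fits in a short sketch. The tiny 2-number word vectors below are invented for illustration; real models learn vectors with thousands of dimensions:

```python
import math

# A bare-bones sketch of self-attention for the word "sat" in a 3-word
# sentence. The 2-number word vectors are invented for illustration.
words = ["the", "cat", "sat"]
vectors = {"the": [0.1, 0.9], "cat": [0.8, 0.2], "sat": [0.7, 0.3]}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Score how relevant every word in the sentence is to "sat"...
query = vectors["sat"]
scores = [dot(query, vectors[w]) for w in words]
weights = softmax(scores)   # ...and turn the scores into weights summing to 1

# ...then blend all the word vectors according to those attention weights.
blended = [sum(w * vectors[word][i] for w, word in zip(weights, words))
           for i in range(2)]
print(dict(zip(words, [round(w, 2) for w in weights])))
# "sat" ends up attending most strongly to "cat" - the highlighted word.
```

The point to notice is that the weights are computed from the words themselves, so what gets “highlighted” changes with every sentence the model reads.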


Transformers have proven extremely effective for natural language processing tasks. They power many advanced language models (like GPT-based models). Because they can be trained on very large datasets, Transformers learn language patterns, grammar, and even some world knowledge. They are behind AI systems that can translate languages, summarize articles, answer questions, and generate text. And if you think about it, in my advanced Gospel initiatives to reach the lost during the Tribulation period, a programming language like JavaScript is nothing more than another form of language. This also explains why an LLM can “speak” different languages like Hindi, Mandarin, Hebrew, etc. It has simply absorbed enough information during training to figure out what should come next, in multiple languages, concurrently if required.


Looking at entire sequences at once makes Transformers faster for training on huge texts compared to older sequential models. Today, Transformers are the foundation of state-of-the-art AI in language and are also being applied to images, audio, and other modalities.


Weights in AI: What They Are and Why They Matter

We briefly mentioned weights when discussing neural networks. Let’s explore this concept further, as weights are central to how AI models learn. In a neural network, each connection between nodes has a weight, which is basically a number. Weights determine how much influence one neuron’s output has on the next neuron. If a weight is larger, it means that connection is stronger – the input it carries is more important to the next layer’s calculation. If a weight is small (or zero), that connection contributes little or nothing. You can think of weights like volume knobs for each input: turning the knob up amplifies that input’s effect on the result, while turning it down silences it.


Weights are essentially where the “knowledge” of a trained AI model is stored. When we train an AI, what we’re really doing is adjusting the weights. Each weight adjustment nudges the network to give a desired output for given inputs. [NOTE: this is how the LLM development teams “shape” a response narrative]. Over many rounds of training, the weights settle into values that make the network perform well. In other words, learning in a neural network = changing its weights to improve accuracy. A well-tuned set of weights is what allows a neural network to recognize a cat in a photo or predict tomorrow’s weather.


Learning by adjusting weights: The process of training (covered in Part 2 of this series) will modify these weights gradually. If the network makes an error (for example, it thought an image was a dog when it was actually a cat), the training algorithm will adjust the weights in a way that makes the network less likely to make that mistake next time. This might mean increasing some weights and decreasing others. Over time, these adjustments lead to the network “learning” the task. As an analogy, imagine a student learning to throw a basketball into a hoop. At first, they might throw too far or too short. With each try (and feedback about the miss), they adjust the force and angle of their throw. In a neural network, the weights are like that force and angle, adjusted little by little with feedback. After enough practice, the weights reach the right configuration so that the network consistently makes the correct prediction, and those results are repeatable.
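The basketball analogy can be shown with a single weight. In this toy sketch (the numbers are invented for illustration), we want one weight to learn the hidden rule “output = 3 × input” purely from overshoot/undershoot feedback:

```python
# Learning = nudging a weight to shrink the error, like adjusting a throw.
# Goal (unknown to the model): output should equal 3 * input.
weight = 0.0          # the "throw" starts badly calibrated
rate = 0.1            # how big each corrective nudge is
data = [(1, 3), (2, 6), (3, 9)]  # (input, correct output) practice pairs

for _ in range(50):                  # each pass is one round of practice
    for x, target in data:
        prediction = weight * x
        error = prediction - target  # feedback: overshoot or undershoot?
        # Nudge the weight a little in the direction that shrinks the error.
        weight -= rate * error * x

print(round(weight, 3))  # → 3.0 (the weight has "learned" the rule)
```

A real network does exactly this, just with millions or billions of weights nudged at once—which is also why the final set of weights *is* the model’s stored “knowledge”.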


Summary

Perhaps you can see where I’m going with this. So far, we have learned how an AI works. Nowhere in this complex system is the type of “thinking” a human is capable of. And yet, if we are lured by the narrative of Artificial Intelligence, then Artificial General Intelligence, and ultimately Artificial Super Intelligence, we can be misled. The difference between the three (AI, AGI, and ASI) comes down to: (a) investment in highly performant chips, (b) investment in feeding tons of information into the model, and (c) reinforcement learning and tuning. If we don’t stay grounded, we can be frightened by what we see out there. Spoiler alert: don’t be. What we need to be very concerned about are the technocracy/elites/oligarchs and outright luciferian bloodlines that bend the training and tuning process toward their own evil agendas.


As we head into Part 2 next week, I will be unpacking more background on AI technology, because I want us to grasp the concepts and principles of what is actually going on with these technical achievements. In Part 3 we will lay some final groundwork, but it is Part 4 where we will fully explore the argument of whether AI can be possessed by one or more demons—or why not.


I do apologize in advance for the depth of this series. But as your Brother in Christ and a fellow Watchman for the Lord, I want to help pull people back from the edge of fear. By putting all this on the table, we will more readily see the deception narrative. Always remember that we are close; the Lord will not allow the height of this Tower of Babel to go unchecked—this should be thrilling news!