

Apr 3, 2025
Can AI Be Possessed By Demons? (2 of 4) by Scott Townsend
This article can be viewed in its entirety HERE.
Demystifying AI: Understanding How It Learns and Works
You've probably heard a lot about AI lately—stories ranging from impressive breakthroughs to scary warnings. But let's pause and clearly understand what AI really is, how it learns, and why it works the way it does. My goal here is to cut through the confusion and provide clarity, so you can better discern the stunning pace of change we see in the news today.
How AI Models Are Trained
Training an AI model is a lot like teaching a student or training a pet—through practice, repetition, and gentle corrections. Here's how it works, step by step:
Gathering Examples: First, researchers collect a lot of examples related to what they want the AI to do. If they're teaching an AI to recognize handwritten numbers, they show it many images of handwritten digits, each labeled clearly ("this one is a 5, this one is a 2"). It's similar to giving a student plenty of practice exercises before a test.
Making Guesses: At the beginning, the AI doesn't know anything yet, so its first guesses are mostly random. It's like a beginner playing darts with a blindfold on—at first, it rarely hits the target. Over time, however, these guesses become more educated as it learns. If you recall from last week's post, these LLMs use probabilities and weights that direct the response. By giving it feedback on the handwritten numbers, the AI can dynamically alter its weights so that the next guess is more accurate. There are built-in "rewards" given to the AI for correct behavior, often in points. Obviously, the AI is incentivized to score the highest points possible (aka make the best responses).
Checking Mistakes: After each guess, the trainers check how accurately the AI responded. If the AI guessed an image was a "3" but it was actually a "5", that's a mistake. Just like a student who learns from reviewing incorrect test answers, the AI learns from these mistakes. It is now increasingly common for AIs to validate other AIs as they are tested: there is a test bank of questions and answers, a question is asked, one AI responds, and another AI evaluates the response and determines whether it is correct. Humans are increasingly out of the loop, so to speak. Yes—this is the scary part. Researchers often call this AI feedback or "LLM-as-a-judge" evaluation (distinct from the unsupervised learning described below).
Adjusting and Improving: Getting back to the incorrect number guess, the AI learns from this feedback. It adjusts itself (think of it as tweaking tiny knobs inside its neural weights) so it can do better the next time it sees something similar. It's very similar to how we adjust the hot and cold controls in a shower until we get the perfect temperature. Each adjustment brings it closer to consistently correct answers.
Repeating the Process: The AI repeats this process many, many times, learning and adjusting after every attempt. Eventually, the AI gets better and better at recognizing patterns—just like how we improve at riding a bike or cooking a new recipe through practice. After extensive practice, the AI can reliably recognize new images it hasn't seen before. This is more about brute-force pattern matching in the data than it is intelligence (more on this aspect in Part 4 in a couple more weeks). For the curious, a tiny code sketch of this guess-check-adjust loop follows below.
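To make the loop concrete, here is a minimal sketch in Python. Everything in it (the toy three-number "images," the single artificial neuron, the learning rate) is my own illustrative choice, not any real model's training code, but the guess, check, and adjust steps are the same idea in miniature:

```python
import math

# A toy "recognize the pattern" trainer: one artificial neuron learning to
# tell two kinds of 3-number "images" apart. All numbers here are invented
# for illustration; real models have billions of these knobs (weights).

examples = [
    ([0.9, 0.1, 0.8], 1),  # label 1: "this one is a 5"
    ([0.2, 0.8, 0.1], 0),  # label 0: "this one is not a 5"
    ([0.8, 0.2, 0.9], 1),
    ([0.1, 0.9, 0.2], 0),
]

weights = [0.0, 0.0, 0.0]  # the "tiny knobs" start out knowing nothing
bias = 0.0
learning_rate = 0.5

def predict(features):
    # Weighted sum squashed into a 0-to-1 confidence score (a sigmoid)
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

for epoch in range(1000):              # repeat the process many, many times
    for features, label in examples:
        guess = predict(features)      # make a guess
        error = label - guess          # check the mistake
        for i in range(len(weights)):  # adjust each knob a little
            weights[i] += learning_rate * error * features[i]
        bias += learning_rate * error

# After training, it generalizes to new "images" it has never seen:
print(round(predict([0.85, 0.15, 0.9]), 2))  # close to 1: "a 5"
print(round(predict([0.15, 0.85, 0.1]), 2))  # close to 0: "not a 5"
```

Run it and the two final predictions come out near 1 and near 0: after enough repetitions, the knobs have settled into values that recognize the pattern.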
The training process can require lots of data and powerful computers, but once done, the AI becomes quick and efficient at the task it learned. It's similar to how professional athletes train extensively to perform effortlessly during a competition. What training and muscle memory are for an elite athlete, the training systems and feedback are for the AI. Everything has to be right before a model is approved for release to early-release/early-preview users, and then eventually to the general public. I have access to these early-release models and can generally see a significant increase in what I will characterize as "competency" from one model to the next.
Different Ways AI Can Learn
Not all AI learns in the same way. You may have heard the terms below and noticed that the field is moving away from strictly supervised learning toward unsupervised and reinforcement learning. Whenever there are serious risks in getting an answer wrong, such as when an AI model "hallucinates" (here), humans still have to be in the training loop. Here are the main types of learning:
Supervised Learning: Like learning with flashcards or a teacher. The AI has clear examples with correct answers and learns by practicing over and over. This method is excellent for tasks like recognizing faces in photos or filtering spam emails.
Unsupervised Learning: Imagine sorting photos into categories without knowing what the categories are ahead of time. The AI explores data on its own, looking for patterns without guidance. It might identify common features like colors, shapes, or themes and group similar items together automatically.
Reinforcement Learning: Similar to learning a video game. The AI tries different moves, receiving rewards for good choices and penalties for mistakes. Over time, it gets smarter and makes better decisions. This type of learning has been used successfully in teaching AI to play complex games like chess and Go, where it surpasses human performance. Given enough time, these specialist AI models are quite impressive (a small code sketch of the idea follows after this list).
Human-in-the-Loop Learning: Sometimes AI needs extra help. Humans guide AI by correcting mistakes or answering its questions, making sure it learns properly. It's like having a mentor or experienced guide available during training. As I mentioned above, this approach is crucial when AI makes decisions affecting human lives, like medical diagnosis, or safety protocols for drug interactions at a research university investigating the next generation of pharmaceuticals.
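Here is that reinforcement-learning sketch: a tiny, self-contained Python example of tabular Q-learning. The five-square "game," the rewards, and all the numbers are my own illustrative choices (real game-playing systems are vastly larger), but the reward-and-penalty feedback loop is the same idea:

```python
import random

# Toy tabular Q-learning, purely for illustration. The "game" is a line of
# squares 0..4; the AI starts at 0 and earns +10 for reaching square 4,
# but pays -1 per step, so it learns the shortest path.

actions = [-1, +1]  # step left or step right
q = {(s, a): 0.0 for s in range(5) for a in actions}  # the score table
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != 4:
        # Occasionally explore a random move; otherwise use the best known one
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), 4)
        reward = 10 if next_state == 4 else -1  # reward success, penalize wandering
        best_next = max(q[(next_state, a)] for a in actions)
        # Nudge this move's score toward (reward + discounted future value)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: from every square, the best move is "go right" (+1)
print([max(actions, key=lambda a: q[(s, a)]) for s in range(4)])  # typically [1, 1, 1, 1]
```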
So, each of these training methods is useful for different situations, and often AI developers combine these methods for the best results. The combination ensures flexibility and accuracy in the wide range of applications AI has today, from maps featuring driving assistance to personalized recommendations on streaming platforms.
NOTE: At some point in the future, Tribulation Saints will be confronted by technocracies, governments, or corporations that will explicitly harm others. That is sobering, to say the least. For now, the Holy Spirit has checked this outcome. He is the RESTRAINER during the Church Age (and up to the end of this dispensation). But when the Lord determines the time is right, He will step aside to let mankind's full depravity, sin, evil, and lustful, deadly, murderous actions take their full course. This is why we must try hard to help people understand the urgency of coming to faith in Jesus NOW, as opposed to after the Rapture. You might have heard a pastor say something like: "If you think coming to Christ now is hard, what makes you think you'll be able to come to Christ later, after the Rapture?" The point is to give them the warning and leave it to the Lord. Many will come to Him at the right time. Here's an excellent article from our friends at Got Questions (here).
AI in Action: What Happens After Training
After training, the AI is ready for real-world tasks—like software programming, recognizing photos, understanding spoken commands, or helping with everyday tasks. If you've heard the term AGI (Artificial General Intelligence), this is the aim: a generalist AI (aka a foundation model) that is able to do just about anything asked of it. I encourage you to go (here) to see the announcement and video regarding a new model with improved deep thinking.
The newer models are being tasked with solving some of our most difficult challenges. For example, there is the medical doctor in the UK who spent 10 years in research to figure out the cause of and treatment for a superbug (here). The AI was given all the research, and it took only 2 days to produce results. The research doctor confirmed it was the same correct result. Thus, we are likely to continue seeing major advancements accelerate even more.
When AI is used in real life (we call this "inference"), it has to quickly apply everything it learned. Think of it as taking an exam—training was the studying, and inference is answering the questions correctly and promptly. For instance, when you ask your phone's voice assistant a question, it uses its training to quickly understand and respond.
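Continuing the earlier toy example, here is inference in code. The "frozen" weights below are made up for illustration (imagine they came out of the training loop sketched earlier); the point is that inference is just a fast forward pass, with no more adjusting:

```python
import math

# Inference in miniature: the weights are frozen (no more learning) and the
# model just applies them, fast. These particular numbers are invented for
# illustration; picture them as the output of the earlier training sketch.

frozen_weights = [4.2, -3.9, 4.5]
frozen_bias = -0.3

def infer(features):
    # One quick forward pass: no feedback, no adjusting, just an answer
    z = sum(w * x for w, x in zip(frozen_weights, features)) + frozen_bias
    return 1 / (1 + math.exp(-z))

print(round(infer([0.9, 0.1, 0.8]), 2))  # high confidence: "this is a 5"
```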
NOTE: I will continue updating you on what to look out for so we are smart and can inform others of best practices, warnings, and such. AI is being wired into our smart devices (an intrusion, in my view), so I want you to have the information you need to make these important decisions and tradeoffs.
It's crucial that AI works quickly and efficiently in real-world applications—especially in things like self-driving cars, medical diagnostics, or voice assistants—so developers constantly optimize AI to be fast and accurate. Efficiency matters because we depend on these systems for safety, convenience, and effectiveness in our daily lives. The other, less-stated reason for efficiency is the power (from the electrical grid) required to run all the data centers and servers involved…enough to run a small city. Contrast that with the superiority of the brain the Lord created: all we need to do to have incredible thinking ability is eat a Twinkie ;) Not millions of watts of power like computers need, but a hundred calories or so.
Teaching AI to "Reason" Better
One big step forward in AI is teaching it to reason clearly. Right now, many AI systems recognize patterns well but sometimes struggle with logical reasoning.
Imagine solving a puzzle or working through a tricky math problem—most of us break it down into simpler steps. Researchers are teaching AI to do the same. They encourage AI to "show its work" step by step, improving accuracy and reliability. If you use AI for work, or have played around with the more powerful models, you can actually spot the model "thinking" about the best way to understand and then answer your prompt (aka your question). It is VERY INTERESTING to read the model's reasoning, because you can notice if the reasoning is wrong or off track. Then you can correct it through a better prompt next time until you get a solid result. In a strange way, YOU actually become the human in the middle! It's kind of mind-blowing, from a certain point of view.
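Here is what that looks like from the user's side, in a small Python sketch. It only builds the prompt text; no particular chatbot or vendor API is assumed, so plug the result into whatever AI tool you use:

```python
# Asking a model to "show its work." This sketch only builds prompt text;
# paste the output into whatever AI tool you use and compare the answers.

question = "A shirt costs $25, is 20% off, and sales tax is 8%. Final price?"

plain_prompt = question
reasoning_prompt = (
    question
    + " Think step by step, showing each intermediate calculation"
      " before stating the final answer."
)

print(reasoning_prompt)

# With the reasoning prompt, the visible steps are auditable by YOU:
#   discount: $25.00 * 0.80 = $20.00
#   tax:      $20.00 * 1.08 = $21.60  -> final price $21.60
# If a step drifts off track, you refine the prompt and try again.
```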
Improving AI reasoning means the AI doesn't just guess—it carefully thinks through problems, makes logical connections, and provides explanations. Increasingly, the primary "foundation model" will delegate a question, or part of a question, to smaller, more specialized "experts" for resolution. This approach is loosely known as MoE, "Mixture of Experts," and it has allowed otherwise generalized LLMs to make big advances in narrower fields. And of course, all the marketing hype will tell you it makes AI systems more helpful and trustworthy in everyday use, from answering complicated questions to advancing cancer research or solving for fusion energy…the supposed unlimited clean energy (here).
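For flavor, here is the Mixture-of-Experts idea in miniature. One heavy caveat: in a real MoE model, the "router" is a learned gate inside the neural network choosing experts token by token, not keyword matching between separate programs. The crude router below is purely my own conceptual toy:

```python
# The Mixture-of-Experts idea in miniature: a router sends each question
# to a specialist, or falls back to a generalist. Purely illustrative.

def math_expert(question):
    return "[math specialist handles: " + question + "]"

def medical_expert(question):
    return "[medical specialist handles: " + question + "]"

def generalist(question):
    return "[general model handles: " + question + "]"

def route(question):
    text = question.lower()
    # Crude "gating": score each expert's fit for the question
    scores = {
        math_expert: sum(word in text for word in ("solve", "equation", "sum")),
        medical_expert: sum(word in text for word in ("drug", "dosage", "symptom")),
    }
    best = max(scores, key=scores.get)
    return best(question) if scores[best] > 0 else generalist(question)

print(route("Solve this equation for x"))       # routed to the math expert
print(route("What symptoms suggest the flu?"))  # routed to the medical expert
print(route("Tell me about lighthouses"))       # falls back to the generalist
```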
Incorporating reasoning skills also helps AI avoid mistakes and misunderstandings, making it safer and more reliable, especially in sensitive applications like healthcare, finance, and legal advice. Yet it doesn't take a lot of effort to find that AI can also create misunderstandings and introduce errors on purpose to deceive. See these articles here and here.
Summary
We have been laying some groundwork here for next week's post, which will cover topics such as how to "talk" to an AI using a "prompt." Plus, there are a lot of cryptic numbers after most LLM model names. The OpenAI o1 model, for example, is thought to have 300 billion parameters. That's something we'll talk more about next week; these numbers matter, and they roughly correlate to the sophistication of the interaction between an AI and a human. Finally, xAI's Grok 3 beta models (here) are making lots of headlines today. Yes, this is the same "X" (formerly Twitter) that Elon Musk owns. You know, the guy who wore a Baphomet and inverted cross on his Halloween costume (here) a couple of years ago. There is an interesting article on how well Grok 3 is doing against some of the other leaders in the field (here).
There is a lot of smoke and mirrors trying to beckon users to the latest AI offering, but make no mistake: the Beast System is rising. I'm not alone in my thoughts here. Many believe that the news around waste, fraud, and abuse is a cover/distraction for AI infrastructure being systematically installed and wired into our government systems, including the military. If that's not a sign of the times, I don't know what is.
As I covered in my recent interview with Tom Hughes of Hope for Our Times coming out today (sorry, I don't have the link yet), we are literally seeing the Tower of Babel reaching new heights as mankind tries to completely usurp God and "become like gods." Note the small "g" there. The transhumanist movement is getting the technical breakthroughs that will enable the dreams of the elites who yearn to break free of God and His authority. Won't they get the shock of their lives at the Great White Throne when they have to bend their knee to Jesus and confess that He is LORD!