
Can AI Be Possessed By Demons? (3 of 4)

Apr 4, 2025

by Scott Townsend


This article can be viewed in its entirety HERE.


Behind the Scenes: How AI Systems Interact with Users

When you chat with an AI, such as asking questions to a chatbot or a virtual assistant, there's more happening than meets the eye. You see your question and the AI's reply, but behind the scenes there's a hidden set of instructions guiding how the AI interacts with you. Understanding these hidden instructions can help you better grasp how AI conversations work and how critical it is to “talk” to an AI correctly in order to achieve the desired outcome. This hidden set of instructions is called a system prompt. When you interact with AI, you will almost never see it.


However, because I do software development, I hand-craft system prompts to give the AI more context for my requests. Remind me to talk more about this sometime, because it’s actually quite interesting. But not now…LOL. I have an example where I extract earthquake metadata from news stories from around the world. The system prompt I use provides the instructions, rules, and output requirements I need before storing the results in DocumentDB (a NoSQL database service on AWS).


User Prompts and System Prompts: What's the Difference?

Let's clearly differentiate two important terms:


User Prompt: This is simply what you type or say when interacting with an AI chatbot or website. For example, if you ask, "Can you explain why the sky is blue?" that's your user prompt. (BTW, this is a very common test prompt.) You could be forgiven for using the word “question” instead of “prompt,” but as it turns out, “prompt” is the right word to use in an AI context.


System Prompt: This is a hidden set of instructions the AI receives before your interaction begins. Think of the system prompt as the AI's job description—it tells the AI what its role is, how to behave, and what tone to use. You usually never see these instructions, but they significantly influence the AI's responses. Don’t be alarmed by the idea of a behind-the-scenes or “secret” prompt. The system prompt is simply a set of instructions that shapes the AI’s behavior and thinking before it processes the user prompt.


Why Do We Need System Prompts?

System prompts ensure that the AI consistently behaves appropriately and predictably. They establish the AI’s persona, guiding the AI to respond in a specific style or to follow certain guidelines. For instance, system prompts can instruct the AI to:


Maintain politeness and professionalism.


Use simple language suitable for children.


Avoid giving medical or legal advice.


Refuse inappropriate or dangerous requests.


Think of a system prompt like a set of guidelines for a customer support representative: no matter the question, the representative must stay professional, polite, and helpful.


How System and User Prompts Work Together

Let's break down a practical example to illustrate this:


System Prompt (Hidden Instruction):


"You are a helpful and friendly science tutor. Always explain things clearly and simply, encourage questions, and never give abrupt yes/no answers."


User Prompt (Your Input): "Why is the sky blue?"


AI's Response: "Great question! The sky looks blue because sunlight, which is actually a mix of colors, interacts with our atmosphere. The blue color in sunlight gets scattered around more than other colors when it hits air particles. So, wherever you look in the sky, you mostly see scattered blue light. It's like how water sprays from a sprinkler in every direction—blue light scatters all around the sky more than other colors, making the sky appear blue. Hope that helps!"


Notice how the response directly matches the hidden system prompt: friendly, detailed, clear, and educational.
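To make this concrete, here is a minimal sketch in Python of how a chat request is typically assembled before it is sent to a model. It follows the common "messages" convention that many chat APIs use, where the system prompt is placed ahead of the user prompt; `build_request` is a hypothetical helper for illustration, not a call from any real vendor's library.

```python
# A minimal sketch of how a hidden system prompt and a visible user
# prompt are combined into one request. The "messages" structure
# mirrors a common chat-API convention; build_request is a
# hypothetical helper, not a real library function.

def build_request(system_prompt, user_prompt):
    """Combine the hidden system prompt with the visible user prompt."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},  # hidden instructions
            {"role": "user", "content": user_prompt},      # what you typed
        ]
    }

request = build_request(
    "You are a helpful and friendly science tutor. Always explain "
    "things clearly and simply, encourage questions, and never give "
    "abrupt yes/no answers.",
    "Why is the sky blue?",
)

# The system prompt always rides along first, even though you never see it.
print(request["messages"][0]["role"])  # system
```

The key point is simply the ordering: the model reads its "job description" before it ever reads your question.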


The Importance of Clear Prompts

To illustrate the importance of the prompt, let’s take a moment to compare and contrast a couple of scenarios.


Example 1: Poor Prompt, Unclear Context

Prompt:


“I need a dinner recipe for my kid.”


AI’s Response (Hypothetical):


“Here’s a quick stir-fry recipe featuring peanut sauce, vegetables, and chicken…”


Analysis:


The AI doesn’t know your child has a severe peanut allergy—it wasn’t mentioned in the prompt.


You receive a recipe that could be dangerous or useless for your needs.


The model simply picked a common, highly rated dish from its “mental library,” but it lacks crucial information (no context about allergies or dietary restrictions).


Example 2: Good Prompt, Proper Context

Prompt:


“My child has a severe peanut allergy. I need a 15-minute dinner recipe with chicken and vegetables, but no peanut products whatsoever. He also dislikes onions. Please suggest some variations.”


AI’s Response (Hypothetical):


“How about a quick chicken and bell pepper stir-fry? Use olive oil, garlic, and a mild chili sauce instead of peanut sauce. For extra flavor, add sliced carrots or zucchini—no onions included. You can also substitute chili sauce with BBQ sauce if you prefer a sweeter taste…”


Analysis:


By including specifics (peanut allergy, time limit, taste preferences), the AI filters out irrelevant or unsafe options.


The model provides multiple alternatives and clarifies how to adapt the recipe.


Why “Bad AI” Might Just Be “Bad Prompts”

Lack of Context: If you don’t tell the AI important details (e.g., allergies, time constraints, style of cuisine), it can only guess based on general data. This is exactly what we witnessed in the example above.


Vague Instructions: Saying “I need a recipe” is too broad. The AI will offer a generic response, which might not fit your real needs.


Insufficient Constraints: Without stating your constraints (e.g., no peanuts, 15-minute max), the AI won’t know what solutions are actually viable.


When the AI seems to behave “badly” or delivers an unwanted answer, it’s often because it was never given the right details to begin with. This is why crafting a good prompt—with clear, relevant information—makes a huge difference in the quality of AI-generated outcomes. I can guarantee you that the DOGE team working for Elon Musk are experts at crafting both system and user prompts as they root out waste, fraud, and abuse.


Adjusting the System Prompt: Human-in-the-Loop

Sometimes, people supervising the AI (the platform developers or safety team moderators) adjust the hidden instructions based on user feedback. For example, if users feel the AI sounds too formal or robotic, the developers might tweak the instructions to make the AI sound warmer and friendlier. This ongoing human guidance helps AI interactions feel more natural and improves user experiences over time. NOTE: One of OpenAI's claims about their GPT-4.5 model is that it is more human-like as it interacts with users. I can confirm it is a big step forward from GPT-4.0, which always seemed stiff and artificial.


Pro Tip (and don’t laugh) — if you want to know if something is written by an AI foundation model, look for uniform sentence and paragraph lengths, no swearing, no misspelled words, and perfect grammar!


AI Parameters: The Knobs Inside the Machine

When reading about AI, you've probably encountered the term "parameters," often mentioned with impressive numbers like billions or trillions. But what exactly are these parameters?


Think of parameters like tiny knobs and switches inside a complex machine. Every AI model has thousands, millions, or even billions of these internal settings. During training, these knobs are adjusted to help the AI understand and respond correctly to inputs.


Here's a simpler analogy: imagine adjusting the equalizer settings on your stereo system to get perfect sound—bass, treble, balance. Each parameter is like one of these knobs. During training, the AI fine-tunes all these knobs, learning how best to handle the inputs (like your voice commands or written questions).
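You can even do the knob-counting arithmetic yourself. In a fully connected network layer, every input-to-output connection carries one weight, and every output carries one bias; each of those is a parameter. The toy network below is made up purely for illustration, but billion-parameter LLMs come from the same arithmetic applied at enormous scale.

```python
# A toy illustration of what "parameters" are: every weight and
# bias in a network is one adjustable knob. The tiny network sizes
# here are invented for illustration only.

def dense_layer_params(n_inputs, n_outputs):
    # one weight per input-output connection, plus one bias per output
    return n_inputs * n_outputs + n_outputs

# a tiny network: 4 inputs -> 8 hidden units -> 2 outputs
total = dense_layer_params(4, 8) + dense_layer_params(8, 2)
print(total)  # 40 + 18 = 58 parameters
```

Scale those layer sizes up into the tens of thousands, stack dozens of layers, and you arrive at the billions of parameters the headlines talk about.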


Why Do Parameter Numbers Matter? More parameters typically allow an AI model to learn more detailed information. Think of it like pixels in a photo—the more pixels, the clearer and richer the picture. Similarly, more parameters can help an AI recognize subtle differences and nuances. However, having many parameters requires more computational power and memory, just like higher-resolution images require more storage space and processing power. Yep, some of my astrophotography pictures are over 800MB for a single image! You can find these on my Facebook (here).


Is Bigger Always Better? Not necessarily. While large numbers of parameters can enable detailed learning, too many parameters without sufficient training can be wasteful and ineffective. It's like having a huge library—if you never organize the books properly, you'll struggle to find what you need. Smaller, efficiently trained models can often perform just as effectively for many everyday tasks.


Parameter Count vs. Speed. One of the big tradeoffs to weigh before deciding on an AI model with more or fewer parameters is raw output performance—often expressed as tokens (see below for definition) per second. As long as the quality of the response is good, getting those good responses in less time is always beneficial and more efficient. Often, the smaller models are much faster. That makes sense, because the larger LLMs have a lot more room for complex thinking. Smaller ones don’t, so they are generally much faster to respond.
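The speed tradeoff is easy to feel with some back-of-the-envelope math. The tokens-per-second figures below are made-up example numbers, not benchmarks of any real model; they just show why a smaller, faster model can be the better everyday choice when its answers are good enough.

```python
# Back-of-the-envelope: how output speed affects wait time.
# The tokens-per-second rates are invented example numbers,
# not measurements of any real model.

def seconds_for_response(n_tokens, tokens_per_second):
    return n_tokens / tokens_per_second

# same 500-token answer from two hypothetical models
large_model = seconds_for_response(500, 25)   # bigger model, slower output
small_model = seconds_for_response(500, 100)  # smaller model, faster output
print(large_model, small_model)  # 20.0 5.0
```

If both answers are equally good, the four-fold difference in waiting time adds up fast over a day of use.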


Tokens and Vectors: How AI Understands Information

Two other common but confusing terms you'll often see are "tokens" and "vectors." Let’s break them down:


Tokens: When AI processes text, it doesn't read the way humans do. It first breaks sentences down into smaller pieces called tokens. Tokens can be words or parts of words. For example, "Understanding AI" might become two tokens: ["Understanding", "AI"].


Why Tokens? AI models handle information easier in smaller, predictable chunks. Tokens help standardize how language is fed into the model. Too big, and important details might be lost; too small, and the model can get overwhelmed by too much detail.
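A rough sketch helps make tokenization concrete. Real models use subword tokenizers (such as byte-pair encoding), so actual token splits often differ from whole words; the naive whitespace version below is only meant to show the idea of breaking text into smaller, predictable chunks.

```python
# A naive illustration of tokenization. Real tokenizers split on
# subwords, not just whitespace, so real token boundaries differ;
# this simplified version only demonstrates the concept.

def naive_tokenize(text):
    return text.split()

tokens = naive_tokenize("Understanding AI")
print(tokens)  # ['Understanding', 'AI']
```

A real tokenizer might split an unfamiliar word like "astrophotography" into several subword tokens, which is how models cope with words they rarely see.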


Vectors: Once text is broken into tokens, the AI converts each token into a numeric form—vectors. A vector is essentially a list of numbers that represent the token. For example, the word "happy" might be represented by a list of 300 numbers. Hard to believe this is legit, but it’s true.


Why Vectors? AI models use mathematics to understand patterns. Vectors turn words into numbers that models can easily process. Words with similar meanings (like "happy" and "joyful") have vectors that are close to each other in numerical space, helping the AI to grasp meanings and context. Remember when we talked about neural networks in Part 1? That network diagram showed the pathways of these tokens / vectors.
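Here is a small sketch of that "closeness in numerical space" idea. The three-number vectors below are invented for illustration (real embeddings have hundreds of learned dimensions), but the cosine-similarity math is the standard way models compare word vectors: similar meanings point in similar directions.

```python
# Toy word vectors, invented for illustration. Real embeddings are
# learned from data and have hundreds of dimensions; these 3-number
# vectors only demonstrate how "closeness" works.
import math

embeddings = {
    "happy":  [0.9, 0.8, 0.1],
    "joyful": [0.85, 0.75, 0.2],
    "cold":   [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # 1.0 means pointing the same direction; near 0 means unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "happy" sits closer to "joyful" than to "cold" in this toy space
print(cosine_similarity(embeddings["happy"], embeddings["joyful"]) >
      cosine_similarity(embeddings["happy"], embeddings["cold"]))  # True
```

This is exactly the property that lets a model treat "happy" and "joyful" as near-synonyms without anyone ever writing that rule down.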


AI Data Centers: The Physical Side of AI

Behind every smart AI tool, like a virtual assistant or an image recognition app, there's an advanced data center designed to handle the intense computational needs of AI. The announcement of the new “Stargate” data center project in Texas is (here). You can see what an aerial view of a data center looks like from Sam Altman (here). Here’s what makes AI data centers so expensive and complex:


Specialized Hardware: AI tasks need massive computing power. Traditional servers are designed differently from AI-focused servers, which often use GPUs (graphics processing units) because they excel at performing many calculations simultaneously. Here’s information on NVIDIA’s DGX H200 system (here).


Power and Cooling: Because AI servers perform intense computations, they consume a lot of power and generate substantial heat. Specialized cooling systems, including liquid cooling, are used to keep them operating safely and efficiently.


Networking: AI systems typically require fast, reliable communication between many servers. They often use advanced networking equipment to quickly share large amounts of data during training.


Data Management: AI applications need quick access to massive datasets. Special storage solutions ensure data is rapidly available to keep the AI system efficiently processing information.


Summary

Understanding how AI is structured—from hidden instructions to parameters, tokens, vectors, and specialized hardware—helps demystify the technology. It gives insight into why AI behaves in certain ways. In next week’s post, we will look at the claims that demons can possess AI models.
