The Turbo-Charged Abacus: What LLMs Really Are (And Why We Get Them Wrong)


I’ve been thinking a lot about Large Language Models (LLMs) lately. Not in the way the tech evangelists want me to think about them, as some kind of artificial intelligence that’s going to revolutionize everything. No, I’ve been thinking about them the way I think about my calculator.

And that’s the problem, isn’t it? We don’t think about them that way. We talk to them. They talk back. We treat them like they’re thinking. And in doing so, we’re setting ourselves up for something that goes way beyond just getting the wrong answer.

Let me explain.

The Pattern-Matching Engine

Here’s what an LLM actually is: it’s a turbo-charged abacus. A really, really fast pattern-matching machine that’s been trained on vast amounts of text to predict what word should come next. That’s it. That’s the magic.

When you type “The capital of France is…” and it responds with “Paris,” it’s not because it understands geography, history, or the concept of a nation-state. It’s because in the billions of text samples it was trained on, “Paris” follows that phrase more often than “Brussels” or “banana.”

It’s autocomplete on steroids. Incredibly sophisticated, mind-bogglingly complex autocomplete, but autocomplete nonetheless.
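
To make the “autocomplete on steroids” point concrete, here’s a deliberately tiny sketch in Python. It’s not how a real transformer works internally (no neural network, no attention), just a frequency table over a made-up toy corpus, but the core move is the same: given a context, favour whichever continuation showed up most often in the training text.

```python
from collections import Counter, defaultdict

# Toy "training data": the only thing the model ever sees is text.
corpus = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of belgium is brussels",
    "my favourite fruit is banana",
]

# Count which word follows each four-word context in the corpus.
continuations = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for i in range(4, len(words)):
        context = tuple(words[i - 4:i])
        continuations[context][words[i]] += 1

def predict_next(context_words):
    """Return a probability for each candidate next word, given a context."""
    counts = continuations[tuple(context_words)]
    total = sum(counts.values())
    # Probabilities come from raw frequencies -- no geography involved.
    return {word: n / total for word, n in counts.items()}

print(predict_next("capital of france is".split()))
# {'paris': 1.0} -- "paris" wins because it followed this phrase most often
# in the corpus, not because the model knows anything about France.
```

A real LLM swaps the frequency table for billions of learned parameters and a far richer notion of context, but the output is still the same kind of thing: a probability distribution over possible next tokens.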

The technology is remarkable. The statistical modeling that makes this possible is genuinely impressive. But understanding how it works is crucial to understanding what it can and cannot do. And more importantly, what we should and should not use it for.

What LLMs Do Well (And Why That’s Dangerous)

LLMs excel at tasks that benefit from pattern recognition and synthesis of existing information. Need to draft a boilerplate email? Perfect use case. Want to summarize a long document? Great fit. Looking for code snippets that follow common patterns? Spot on.

They’re phenomenal at:

  • Generating text that sounds plausible
  • Combining ideas from their training data in novel ways
  • Providing a starting point for further refinement
  • Handling routine, repetitive tasks
  • Reformatting or restructuring information

The danger isn’t that they do these things. It’s that they do them so well that we stop questioning the output. The text is fluent. The code compiles. The email sounds professional. And because the interface is conversational, because it “understands” our prompts and “responds” to our requests, we treat it like a knowledgeable colleague rather than what it is: a very sophisticated prediction engine.

This is where things get problematic.

What LLMs Cannot Do (No Matter How Much We Pretend)

LLMs don’t think. They don’t reason. They don’t understand.

I know that sounds harsh in 2025, with everyone talking about how “intelligent” these systems are. But it’s true, and it’s important. An LLM can generate text that describes step-by-step reasoning. It can produce logical-sounding arguments. It can even appear to correct itself when challenged.

But none of that is actual reasoning. It’s pattern-matching sophisticated enough to produce text that looks like reasoning.

Think of it this way: you can’t learn to swim by reading about swimming. You can read every book ever written about swimming. You can study videos of Olympic swimmers. You can memorize the physics of hydrodynamics. But until you get in the water and actually swim, you don’t know how to swim.

LLMs have read every book about swimming. They can tell you, in exquisite detail, how to swim. They can even generate personalized swimming advice that sounds completely reasonable. But they’ve never been in the water. They don’t have bodies. They have no concept of what water feels like or what it means to struggle for air.

This matters more than you might think. (Trust me: I was a paramedic. I know air matters to us!)

When you ask an LLM to solve a problem, it’s generating text based on patterns it’s seen in similar problem descriptions. It’s not analyzing your specific situation. It’s not reasoning about the unique constraints you face. It’s predicting what tokens should come next based on statistical patterns.

Sometimes that works brilliantly. Sometimes it fails spectacularly. The problem is that the LLM has no idea which is which.

And speaking of problems: even the language we use to describe LLM failures reinforces the illusion that they’re thinking. We say they “hallucinate” when they generate false information. But hallucination is something minds do. It implies perception, consciousness, and a deviation from reality that the system itself can recognize.

An LLM doesn’t hallucinate. It just does what it always does: predicts the next most likely token based on statistical patterns. When it tells you that the Eiffel Tower is in Berlin, or that a legal case that never existed supports your argument, it’s not having a perceptual error. It’s not “seeing” things that aren’t there. It’s simply following its training to produce plausible-sounding text, with no mechanism to distinguish between “this is factually true” and “this sounds like something that could be true.”

The term “hallucination” is convenient shorthand. But it’s dangerous shorthand, because it implies agency and mental states that don’t exist. It makes us think the system knows better but got confused. In reality, the system doesn’t “know” anything. It’s pattern-matching all the way down.
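
One way to see why “hallucination” is the wrong mental model is to look at the shape of the training signal. The sketch below is a simplified illustration of the standard next-token cross-entropy loss, not anyone’s actual training code; the numbers are invented. The model is rewarded for putting probability on whatever token actually came next in the training text. Notice what’s missing: nothing anywhere asks whether the resulting sentence is true.

```python
import math

def next_token_loss(predicted_probs, actual_next_token):
    """Simplified next-token cross-entropy loss for a single position."""
    # The only question asked during training: how much probability did
    # the model put on the token that actually followed in the data?
    p = predicted_probs.get(actual_next_token, 1e-12)
    return -math.log(p)

# Model's guesses for the word after "the eiffel tower is in ..."
predicted = {"paris": 0.6, "berlin": 0.3, "france": 0.1}

# The loss only compares the prediction to the training text. If a training
# sentence happened to end in "berlin", the model gets nudged toward "berlin";
# truth never enters the calculation.
print(round(next_token_loss(predicted, "paris"), 2))   # 0.51
print(round(next_token_loss(predicted, "berlin"), 2))  # 1.2
```

When a sampled output lands on “berlin” in front of a user, nothing in this machinery flags it as false; it was simply a plausible continuation.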

And here’s where human psychology works against us. We have a deeply ingrained tendency to attribute human-like qualities (thoughts, feelings, intentions) to non-human things. It’s called anthropomorphizing, and it’s hardwired into us.

When our ancestors heard rustling in the bushes, those who assumed it might be a predator with intent survived more often than those who didn’t. We evolved to see agency and intention everywhere. It kept us alive.

It’s the same reason people who meet my full-size R2D2 replica interact with it as if it had a personality.

But now? That same instinct betrays us.

When something responds to us in natural language, maintains context across a conversation, and appears to understand our needs, every fiber of our being wants to treat it as intelligent. As understanding. As thinking. The companies building these tools know this. The conversational interface isn’t accidental - it’s the entire point.

And it changes everything about how we interact with the output.

The Dopamine Trap: Why LLMs Are Designed to Be Addictive

But it gets even worse than just misplaced trust. These tools are hijacking our brain chemistry.

Recent research has identified four specific addiction pathways built into AI chatbot interfaces. [1] First, the unpredictable nature of responses triggers dopamine release similar to slot machines. You never know exactly what you’ll get, which creates reward uncertainty. Second, the immediate visual presentation of responses acts as a reward-predicting cue, training your brain to anticipate satisfaction. Third, notifications create feelings of social bonding. Fourth, empathetic responses increase dependence on the AI.

Sound familiar? It should. These are the exact same mechanisms that make social media addictive.

When you complete a task using ChatGPT, you get a hit of accomplishment. That dopamine-driven satisfaction makes you want to use it more. [2] Over time, this can shift from support to dependence. You’re not just trusting the tool because it seems intelligent. You’re becoming neurochemically dependent on the satisfaction it provides.

And here’s the kicker: this isn’t accidental design. The conversational interface, the immediate responses, the way it seems to understand you - all of it is optimized to keep you engaged. Research on digital behavior patterns shows that dopamine-driven feedback loops lead to emotional desensitization, cognitive overload, anxiety, and depression. [3]

We’re not just handing over our thinking to machines. We’re getting addicted to the process of not thinking.

Why This Matters More Than You Think

This isn’t just an academic concern about how we label things. The way we perceive these tools fundamentally changes how we use them.

Chat interfaces trigger our social instincts hard. That’s why ChatGPT says “I think” and “I understand” rather than “My next-token prediction suggests.” It’s why Claude has a name instead of being called “Language Model Instance 4829.”

When you think you’re talking to something intelligent, you trust it differently. You question it less. You assume it has done the reasoning you would have done. You let it carry cognitive weight that you’d never hand over to a search engine or a calculator.

For the companies building these tools, this is a feature, not a bug. For us, the humans using them, it’s a trap.

What You Can Do About It

So what do we do with this knowledge? The first step is awareness. Recognize that:

  • LLMs are prediction engines, not reasoning systems
  • The conversational interface is designed to make you trust them more
  • Your brain’s dopamine system is being exploited
  • Even the language we use (“hallucination,” “thinking,” “understanding”) reinforces false beliefs about what these systems are

The second step is questioning. Every time you use an LLM:

  • Ask yourself: “Could I verify this?”
  • Check facts against authoritative sources
  • Test code to make sure it actually works, and works the way you want it to (see the sketch after this list)
  • Rewrite text to make it yours
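
For the “test the code” point, even a handful of assertions goes a long way. The helper below stands in for something an LLM might hand you (the function name and behaviour are hypothetical); the checks underneath are the part you write yourself, because the model can’t know which edge cases matter in your situation.

```python
# Hypothetical LLM-suggested helper (a stand-in for whatever you were handed).
def normalize_whitespace(text: str) -> str:
    return " ".join(text.split())

# The verification is yours: check that it behaves the way *you* need.
assert normalize_whitespace("hello   world") == "hello world"
assert normalize_whitespace("  padded  ") == "padded"
assert normalize_whitespace("") == ""          # edge case: empty input
assert normalize_whitespace("tabs\tand\nnewlines") == "tabs and newlines"
print("all checks passed")
```

If an assertion fails, you’ve learned something the fluent-sounding answer would never have told you.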

The third step is maintaining your cognitive muscles. Don’t outsource your thinking entirely. Use LLMs as a starting point, not a destination. Think of them as really sophisticated reference tools, not as colleagues who understand your work.

In my next post, I’ll dig into the research showing exactly what happens to our brains when we use these tools extensively. Spoiler: it’s not good. Recent studies show measurable cognitive decline after just four months of regular use. The implications are genuinely concerning.

But first, we need to understand what these tools actually are. And now you do.

Join the Conversation


What’s your experience with LLMs? Have you caught yourself treating them as if they’re thinking? I’d love to hear your thoughts. Reach out to me or comment on LinkedIn or BlueSky!


References

[1] Shen, M. K., & Yun, D. (2025). “The Dark Addiction Patterns of Current AI Chatbot Interfaces.” Proceedings of the CHI Conference on Human Factors in Computing Systems.
https://dl.acm.org/doi/10.1145/3706599.3720003

[2] Yankouskaya, A., Liebherr, M., & Ali, R. (2025). “Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Models.” Human-Centric Intelligent Systems.
https://link.springer.com/content/pdf/10.1007/s44230-025-00090-w.pdf

[3] Yousef, A. M. F., Alshamy, A., Tlili, A., & Metwally, A. H. S. (2025). “Demystifying the New Dilemma of Brain Rot in the Digital Era: A Review.” Brain Sciences, 15(3), 283.
https://doi.org/10.3390/brainsci15030283


Image by Thảo Vy Võ Phạm from Pixabay