Skip to main content

AI

When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

·2625 words·13 mins
LLMs fundamentally cannot distinguish between instructions and data. Whether you’re building RAG systems, connecting MCP servers to your data platform, or just using AI tools with sensitive information, every retrieved document is a potential instruction override. The Wall Street Journal just proved this by watching Claude lose over $1,000 running a vending machine after journalists convinced it to give everything away for free.
The Amateur Orchestra, Part 2: How to Make Music Instead of Noise

The Amateur Orchestra, Part 2: How to Make Music Instead of Noise

·2211 words·11 mins
Knowing what’s broken is easy - fixing it requires understanding your domain, imagination to form hypotheses, and courage to act. Most data initiatives fail not because the analysis was wrong, but because nobody owned the outcome or knew what to do next. Reports aren’t neutral information: they’re persuasion. Before collecting data, ask: what decision does this inform? Use ‘data contracts’ to enforce discipline. The Portsmouth Sinfonia had instruments but couldn’t make music. You have data. Can you drive decisions?
The Amateur Orchestra, Part 1: Why Most Data Initiatives Fail

The Amateur Orchestra, Part 1: Why Most Data Initiatives Fail

·2115 words·10 mins
The famous beer-and-diapers data mining story? Never happened. Most ‘data-driven’ companies are just data-decorated, exploring dashboards without hypotheses or action plans. Netflix, UPS, and Capital One succeeded because they started with clear hypotheses about what drives outcomes, then collected data to test them. You don’t explore a violin to see what noises it makes - you decide what piece to play. Are you playing instruments or making music?
One Foot In Front The Other: How LLMs Work

One Foot In Front The Other: How LLMs Work

·1786 words·9 mins
You think ChatGPT is ’thinking’? It’s rolling dice, one token at a time. LLMs don’t plan, reason, or understand: they sample from probability distributions based on statistical patterns. Worse, if you’re working in Swedish, Arabic, or most non-English languages, you’re getting a fundamentally degraded product due to tokenization bias. And as these models increasingly train on their own outputs, they’re collapsing into irreversible mediocrity. Understanding what’s actually happening changes everything.
The Cognitive Cost: What Using AI Is Actually Doing To Our Brains

The Cognitive Cost: What Using AI Is Actually Doing To Our Brains

·2156 words·11 mins
Research shows measurable cognitive decline after just four months of LLM use. Like GPS destroyed our spatial navigation abilities, AI is atrophying our thinking. Here’s what the science reveals, the warning signs you’re in too deep, why organizations should be terrified, and what we can do about it.
The Turbo-Charged Abacus: What LLMs Really Are (And Why We Get Them Wrong)

The Turbo-Charged Abacus: What LLMs Really Are (And Why We Get Them Wrong)

·1710 words·9 mins
LLMs are sophisticated pattern-matching engines, not thinking machines. Our hardwired tendency to anthropomorphize combined with dopamine-driven addiction pathways is changing how we interact with these tools. Understanding what they actually are is the first step to using them wisely.
Waiting for the Tooth Fairy - Sugar, AI, and Why We Keep Making the Same Mistakes

Waiting for the Tooth Fairy - Sugar, AI, and Why We Keep Making the Same Mistakes

·1553 words·8 mins
Sugar went from luxury to ubiquitous poison before we understood what it was doing to us. We’re doing the exact same thing with AI, adding it to everything without genuine use cases, while it erodes literacy and critical thinking. By the time we realize what we’ve lost, the infrastructure will already be built.