Skip to main content

Architecture

The Randomness You Didn't Ask For: Understanding Non-Determinism in LLMs

The Randomness You Didn't Ask For: Understanding Non-Determinism in LLMs

·1704 words·8 mins
Non-determinism in LLMs creates real operational problems: flaky tests, irreproducible bugs, compliance nightmares, and unreliable agents. Most people only know about token sampling, but randomness creeps in across six distinct layers—from floating-point variance to hidden system prompts. Temperature=0 and random seeds help less than you’d hope because they constrain token selection, not reasoning paths. The solution requires structural constraints, not parameter tuning. Until then, you’re rolling dice in production.
Ask, And You Shall Receive: Making An Event Tracker

Ask, And You Shall Receive: Making An Event Tracker

·1713 words·9 mins
Using Claude Code, I built a full-stack application to replace Microsoft Access - without being a developer or understanding React, TypeScript, or Node.js. LLM-assisted coding enables rapid prototyping and bridges the gap between business requirements and technical implementation, but like flight simulators, it doesn’t make non-developers into developers. The parallel matters: functional prototypes aren’t production-ready systems, and knowing the difference requires actual expertise.
Fighting the Unfixable: The State of Prompt Injection Defense

Fighting the Unfixable: The State of Prompt Injection Defense

·2193 words·11 mins
Prompt injection is architecturally unfixable in current LLMs, but defense-in-depth works. Training-time defenses like Instruction Hierarchy, inference-time techniques like Spotlighting, and architectural isolation create practical systems. Microsoft’s LLMail-Inject showed thatadaptive attacks succeed at 32% against single defenses, 0% against layered approaches. Real failures like GitHub Actions compromise prove that securing obvious surfaces isn’t enough. Like SQL injection, it’s manageable with layering.
When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

·2625 words·13 mins
LLMs fundamentally cannot distinguish between instructions and data. Whether you’re building RAG systems, connecting MCP servers to your data platform, or just using AI tools with sensitive information, every retrieved document is a potential instruction override. The Wall Street Journal just proved this by watching Claude lose over $1,000 running a vending machine after journalists convinced it to give everything away for free.