Skip to main content

Security

The Randomness You Didn't Ask For: Understanding Non-Determinism in LLMs

The Randomness You Didn't Ask For: Understanding Non-Determinism in LLMs

·1704 words·8 mins
Non-determinism in LLMs creates real operational problems: flaky tests, irreproducible bugs, compliance nightmares, and unreliable agents. Most people only know about token sampling, but randomness creeps in across six distinct layers—from floating-point variance to hidden system prompts. Temperature=0 and random seeds help less than you’d hope because they constrain token selection, not reasoning paths. The solution requires structural constraints, not parameter tuning. Until then, you’re rolling dice in production.
Fighting the Unfixable: The State of Prompt Injection Defense

Fighting the Unfixable: The State of Prompt Injection Defense

·2193 words·11 mins
Prompt injection is architecturally unfixable in current LLMs, but defense-in-depth works. Training-time defenses like Instruction Hierarchy, inference-time techniques like Spotlighting, and architectural isolation create practical systems. Microsoft’s LLMail-Inject showed thatadaptive attacks succeed at 32% against single defenses, 0% against layered approaches. Real failures like GitHub Actions compromise prove that securing obvious surfaces isn’t enough. Like SQL injection, it’s manageable with layering.
When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

When Data Becomes Instructions: The LLM Security Problem Hiding In Plain Sight

·2625 words·13 mins
LLMs fundamentally cannot distinguish between instructions and data. Whether you’re building RAG systems, connecting MCP servers to your data platform, or just using AI tools with sensitive information, every retrieved document is a potential instruction override. The Wall Street Journal just proved this by watching Claude lose over $1,000 running a vending machine after journalists convinced it to give everything away for free.