
AI Needs an Intent Layer

AI coding tools go straight from prompt to code. That’s the problem.

When there’s no intermediate representation of what you’re building, the code itself becomes the domain model. The AI reads your codebase, treats it as truth, and bases its next decisions on it. But code is incomplete. It’s full of implementation noise — workarounds, arbitrary choices, half-finished ideas. Treat that as your source of truth and every subsequent change inherits the noise. Over time, signal degrades. The system drifts from what you actually meant.

There’s a layer missing between the conversation and the code. Call it an intent layer — a representation of the system made up of assertions and truths. Not code, not documentation, not a prompt log. A model of what the system is and why, expressed in terms of the problem domain.
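To make "assertions and truths" concrete, here is a minimal sketch. The record shape and the example truths are my own illustration, not a proposed format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    statement: str  # a domain truth, stated in the problem's own language
    rationale: str  # why it holds, the part code never captures

# Hypothetical entries for an e-commerce system. Note there are no
# file names or function names here: this is domain, not implementation.
intent = [
    Assertion("An order is immutable once shipped",
              "Corrections are issued as separate credit notes"),
    Assertion("Inventory counts may be briefly stale",
              "We chose availability over strict consistency at checkout"),
]
```

The point of the shape is what it excludes: nothing in an assertion points at code, so the record stays valid through refactors and rewrites.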

[Diagram: prompt-to-code flow, where code becomes the source of truth, versus prompt-to-intent-to-code flow, where the intent layer anchors decisions]

Code isn’t truth

Without an intent layer, you can prompt the same thing twice and get different code each time. The AI has no anchoring model — it’s just pattern-matching against whatever’s in the codebase right now. Every decision is local.

With an intent layer, the AI has something stable to work from. The assertions might be incomplete — it’s genuinely hard to define a system completely — but they’re correct. Code becomes a reflection of the intent, not the other way around. And when the intent changes, the code can be recalculated from it rather than patched on top of itself.

This is the difference between a codebase that gets clearer over time and one that gets muddier.

This isn’t a new idea for developers

Good developers already think this way. Every problem has a natural language — a set of concepts, relationships, and rules that exist before any code does. Understanding that language is what separates someone who builds the right system from someone who builds a system.

Early in my career I noticed that the developers who wrote the best code weren’t the ones with the deepest syntax knowledge. They were the ones who could articulate the domain clearly enough that the code almost wrote itself. I stuck with Ruby for years partly because of this — it let me represent how I conceived of the problem without constantly translating through layers of syntax. The gap between my mental model and the code was small.

Most people don’t work this way. They map problems process by process — if x, do y — rather than modelling the full system. That gives them one dimension, one view. They miss how the relationships play out across the whole thing.

What AI should do with this

The intent layer isn’t something the developer should have to write and maintain like documentation. The AI should build it. It should read between the lines of prompts, ask clarifying questions, and accumulate a model of the system being described.

A developer might be talking about one specific endpoint, but they’re also implicitly telling the AI about the system’s structure, its invariants, its boundaries. The AI should be working across three layers simultaneously — human input, intent model, code — and mapping between them. What the human says gets interpreted through the intent layer before it touches the code.
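A sketch of that three-layer flow, with the model calls stubbed out. Both `extract_truths` and `generate_code` are hypothetical placeholders; in a real tool each would be an LLM call:

```python
def extract_truths(prompt: str) -> set[str]:
    # Placeholder for the AI reading between the lines. Here we simply
    # treat any sentence containing "must" as an implied system truth.
    return {s.strip() for s in prompt.split(".") if "must" in s}

def generate_code(prompt: str, intent: set[str]) -> str:
    # Placeholder for generation anchored on the intent model rather
    # than on whatever the codebase happens to contain right now.
    constraints = "\n".join(f"# constraint: {t}" for t in sorted(intent))
    return f"{constraints}\n# implements: {prompt}"

def handle_prompt(prompt: str, intent: set[str]) -> str:
    intent |= extract_truths(prompt)      # human input -> intent model
    return generate_code(prompt, intent)  # intent model -> code

intent: set[str] = set()
code = handle_prompt(
    "Add a refund endpoint. Refunds must never mutate the original order",
    intent)
```

Even in this toy version, the prompt about one endpoint leaves a durable mark on the intent model, so the invariant it implied constrains every later generation, not just this one.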

Nothing I’m aware of does this today. CLAUDE.md files, Cursor rules, context files — they’re useful, but they’re not capturing the language of the problem you’re trying to solve. They’re configuration, not comprehension. All the current approaches want to jump straight to code.

Ralph loops are the closest thing I’ve seen — autonomous AI loops that iterate through a product requirements document (PRD) until everything’s checked off. The direction is broadly right, but it misses two things.

First, it still expects the human to write the PRD upfront. People don’t naturally work that way. They express intent over time, in fragments, in conversation, in half-formed thoughts — and the AI should be able to pull intent out of those fragments rather than demanding a spec before it starts.

Second, Ralph loops frame the problem as the system iteratively learning to meet fixed requirements. But that’s not how building software actually works. It’s not the machine converging on a static target. It’s the human and the machine working symbiotically over time to shape the solution and define the problem space together.

What I’m describing is almost the inverse. As a developer works through prompts to build a system, the AI should be building a shadow PRD — not one the developer has to write, but one the AI assembles from the truths established by every prompt it receives. It persists for as long as the codebase exists. It has no specific connection to code — it’s not a map of files or functions. It’s a record of what the system is supposed to do and why. Architectural decisions would live here. Domain rules. Invariants. The stuff that should survive a complete rewrite.
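A sketch of how such a shadow PRD might accumulate. The `ShadowPRD` class and its taxonomy of kinds are hypothetical, just enough to show a record that is independent of any code:

```python
class ShadowPRD:
    """Truths assembled from prompts over the life of a codebase.
    Deliberately contains no file paths or function names."""

    def __init__(self) -> None:
        self.truths: list[dict] = []

    def record(self, statement: str, kind: str, source: str) -> None:
        # kind is a hypothetical taxonomy: "architecture", "domain-rule",
        # or "invariant" -- the stuff that should survive a rewrite.
        self.truths.append(
            {"statement": statement, "kind": kind, "source": source})

    def of_kind(self, kind: str) -> list[str]:
        return [t["statement"] for t in self.truths if t["kind"] == kind]

# Fragments arrive over time, one prompt at a time; no upfront spec.
prd = ShadowPRD()
prd.record("Payments are processed asynchronously",
           "architecture", "prompt, week 1")
prd.record("A user has at most one active subscription",
           "invariant", "prompt, week 4")
```

Because every entry is stated in domain terms and tagged with where it was established, the record can outlive any particular implementation of the system it describes.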

The intent layer has to mesh with how humans actually think. Incomplete, associative, iterative. Not demand structure the human doesn’t have yet. Not wait for a complete spec. Just accumulate understanding over time, the same way a good colleague does.

The tools that figure this out will produce code that’s not just correct for the current prompt, but consistent with everything the system is supposed to be.