@breadchris
Created February 17, 2026 23:12
write specs, not chats.

You’ve probably had this experience.

You open an AI tool, type a reasonable request, and get back something that is technically correct and practically useless. You refine it. It improves. You refine it again. It improves further. But you can’t quite tell why.

The problem is rarely just the model. It’s the mental model.

If you treat AI like a search engine, you ask for facts. If you treat it like an oracle, you ask for answers. If you treat it like a conversation partner, you negotiate through dialogue.

But large language models are generative systems that respond to constraints. Every message you send modifies the active specification: tone, scope, assumptions, structure, audience.

Consider a simple task.

“Plan a weekend in Chicago.”

You’ll get something generic.

Now try:

“Plan a low-cost weekend in Chicago for two people who like architecture and quiet neighborhoods. Include one long walk, one museum, and one local restaurant under $30 per person. Avoid tourist-heavy areas.”

The system hasn’t changed. The constraints have. The output improves because the specification improved.

This is true far beyond travel planning. Writing an essay. Drafting feedback. Designing a feature. The quality of the output is tightly coupled to how clearly you define what you want.

Most chat interfaces, however, obscure this reality. Constraints accumulate invisibly in the conversation history. When results shift, users don’t know which assumption is driving the change.

If AI systems are fundamentally constraint-sensitive, then their interfaces should reflect that. Instead of treating the interaction purely as conversation, they could surface the active specification: Who is this for? What structure is expected? What must be included? What assumptions are allowed?

Small design choices — making audience explicit, allowing structure templates, summarizing current goals — would make the invisible constraints legible. Users wouldn’t just “ask better questions.” They would see and edit the spec the model is trying to satisfy.
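As a minimal sketch of what a legible, editable spec might look like, here is one illustrative shape in Python. The field names (`goal`, `audience`, `must_include`, `must_avoid`) are assumptions for the sake of the example, not any real tool's API; the point is only that the constraints become visible data a user can inspect and edit before they are flattened into a prompt.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """An explicit, editable specification for a generative request.

    The field names here are illustrative, not drawn from any real API.
    """
    goal: str
    audience: str = "general reader"
    must_include: list = field(default_factory=list)
    must_avoid: list = field(default_factory=list)

    def render(self) -> str:
        """Flatten the spec into the prompt the model is asked to satisfy."""
        lines = [f"Goal: {self.goal}", f"Audience: {self.audience}"]
        if self.must_include:
            lines.append("Must include: " + "; ".join(self.must_include))
        if self.must_avoid:
            lines.append("Avoid: " + "; ".join(self.must_avoid))
        return "\n".join(lines)

# The Chicago example from above, expressed as an explicit spec:
spec = Spec(
    goal="Plan a low-cost weekend in Chicago for two people",
    audience="travelers who like architecture and quiet neighborhoods",
    must_include=["one long walk", "one museum",
                  "one local restaurant under $30 per person"],
    must_avoid=["tourist-heavy areas"],
)
print(spec.render())
```

Changing the output then means editing a named field rather than appending another chat message, which is exactly what makes the accumulated constraints legible.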

The shift, then, is twofold. Users need a better mental model: not AI as oracle, but AI as synthesis engine under constraint. Designers need to build tools that acknowledge that reality.

The quality of the output is not just a function of model intelligence. It is a function of specification clarity — and whether the interface helps you achieve it.
