@lucis · Created November 19, 2025 14:42
Schema as Living Architecture: From Static Diagrams to Executable Blueprints

When infrastructure becomes commodity and AI can generate implementation, what remains is the essential work of modeling reality itself. This synthesis connects three threads: philosophical foundations (why schemas are ontological commitments), technical validation (how the industry is proving it works), and a concrete path forward (SchemaOS as living architecture).

The Shift: From Drawing Boxes to Living Systems

Software architects spend too much time drawing static boxes and lines. The problem isn't lack of sophistication—it's that our description tools are disconnected from the systems they describe. Architecture as documentation decays the moment code changes. We need living blueprints that the system actually uses.

This isn't just a tooling problem. It's a fundamental shift in how we build software, driven by three converging forces:

Infrastructure commoditization: CRDTs (Yjs, Automerge) eliminated sync algorithms. Local-first (Linear, Figma) solved offline. Real-time frameworks (Liveblocks, Convex) made multiplayer trivial. Event sourcing and temporal databases made history queryable. These used to be PhD-level problems; now they're libraries. The technical complexity that dominated decades of software engineering is being abstracted away.

AI generation maturity: 25% of Y Combinator's Winter 2025 startups had codebases that were roughly 95% AI-generated. PMs prototype full features without engineers. Gemini 2.0, Claude Sonnet, and GPT-4 understand context deeply enough to generate production code. But unstructured prompting produces inconsistent results. The breakthrough: structured prompts derived from domain models produce predictable, governable generation.

Verifiability as automation predictor: Andrej Karpathy's insight—Software 2.0 doesn't automate what's specifiable, it automates what's verifiable. Math, code, and structured tasks advance rapidly because outputs can be checked automatically. This explains why schema-first development works: schemas provide verification criteria that enable AI optimization.

When these forces converge, architecture shifts from documentation to executable specification. Configuration and data drive the system, making architecture inspectable and modifiable at runtime.

What Remains Essential: Ontological Modeling

As infrastructure commoditizes, domain modeling emerges as the irreducible work. This isn't just software engineering—it's applied ontology:

Defining what exists: In Domain-Driven Design terms—bounded contexts, entities, aggregates, value objects. In philosophical terms—ontological commitments about reality. When you design a schema, you're not just creating a database; you're declaring: "These concepts exist, these relationships matter, these invariants hold." This is knowledge representation, the work that cannot be automated away.
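
To make this concrete, here is a minimal sketch in TypeScript of what an ontological commitment looks like once it becomes a schema. The entities (Order, LineItem) and the invariant are illustrative examples, not part of any particular platform:

```typescript
// Illustrative only: declaring what exists in a hypothetical ordering domain.

interface LineItem {
  sku: string;
  quantity: number; // invariant: must be positive
  unitPriceCents: number;
}

interface Order {
  id: string;
  customerId: string;
  items: LineItem[]; // aggregate: an Order owns its LineItems
}

// The schema declares the invariants; the runtime enforces them.
function assertValidOrder(order: Order): void {
  if (order.items.length === 0) {
    throw new Error("Invariant violated: an Order must contain at least one LineItem");
  }
  for (const item of order.items) {
    if (item.quantity <= 0) {
      throw new Error(`Invariant violated: quantity for ${item.sku} must be positive`);
    }
  }
}
```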

Process vs substance: Event sourcing isn't just a technical pattern—it's a philosophical stance that events are more fundamental than state, echoing Alfred North Whitehead's process philosophy. The choice between state-based (traditional CRUD) and event-based systems isn't technical; it's ontological. Some domains (finance, compliance, audit) naturally think in events. Others (catalogs, inventories) think in current state. Architecture must match the domain's natural ontology.
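
A sketch of the two stances, using a hypothetical account domain: in the state-based model only the current balance exists; in the event-based model the events are the primary facts, and state is derived by folding over the history:

```typescript
// State-based (traditional CRUD): only the current value exists.
interface AccountState {
  id: string;
  balanceCents: number;
}

// Event-based: deposits and withdrawals are the fundamental facts.
type AccountEvent =
  | { type: "Deposited"; amountCents: number; at: Date }
  | { type: "Withdrawn"; amountCents: number; at: Date };

// Current state is a projection over the event history.
function currentBalance(events: AccountEvent[]): number {
  return events.reduce(
    (balance, e) =>
      e.type === "Deposited" ? balance + e.amountCents : balance - e.amountCents,
    0
  );
}
```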

Temporal semantics: Bitemporal modeling (valid time vs transaction time) formalizes that systems must track not just "what is true" but "what we knew when about what was true then." This isn't overengineering—it's recognizing that time is fundamental to domain modeling. SQL:2011 standardized it; Datomic made it elegant; modern compliance demands it.
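
As a sketch of what bitemporal data looks like in practice (field names are illustrative, not the SQL:2011 column conventions): each record carries valid time, when the fact held in the domain, and transaction time, when the system recorded it, and an as-of query filters on both:

```typescript
// Illustrative bitemporal record: every fact carries two time dimensions.
interface BitemporalRecord<T> {
  data: T;
  validFrom: Date;           // when the fact became true in the domain
  validTo: Date | null;      // null = still true
  recordedAt: Date;          // transaction time: when the system learned it
  supersededAt: Date | null; // null = this is the latest knowledge
}

// "What did we believe at asOf about the state of the world at validAt?"
function asOfQuery<T>(
  history: BitemporalRecord<T>[],
  validAt: Date,
  asOf: Date
): T | undefined {
  return history.find(
    (r) =>
      r.validFrom <= validAt &&
      (r.validTo === null || validAt < r.validTo) &&
      r.recordedAt <= asOf &&
      (r.supersededAt === null || asOf < r.supersededAt)
  )?.data;
}
```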

The philosophical depth matters because it provides clarity about what can and cannot be abstracted. CAP and PACELC theorems prove that trade-offs between consistency, availability, and latency are domain-dependent, not solvable by better infrastructure. Event sourcing introduces operational complexity that tools can reduce but not eliminate. Understanding this prevents overselling "final abstractions" while focusing effort where value actually lies.

Industry Validation: Schema as Infrastructure

The vision of schema as living infrastructure is being validated across the industry:

Type systems as semantic infrastructure: Modern frameworks increasingly treat type definitions not just as validation but as executable specifications. TypeScript interfaces drive runtime behavior, generate UI components, enable automatic API discovery, and provide type-safe data flow. The schema isn't translated—it IS the executable specification.
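
One widely used incarnation of this idea is Zod, where a single schema definition yields both runtime validation and the static type. The specific fields here are illustrative:

```typescript
import { z } from "zod";

// One definition: the schema validates at runtime...
const UserSchema = z.object({
  id: z.string().uuid(),
  email: z.string().email(),
  age: z.number().int().min(0),
});

// ...and the static type is derived from it, never written separately.
type User = z.infer<typeof UserSchema>;

// Runtime check and compile-time typing flow from the same source.
const result = UserSchema.safeParse({ id: "not-a-uuid", email: "x", age: -1 });
if (!result.success) {
  console.log(result.error.issues.map((i) => i.message));
}
```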

Schema-first platforms proving product-market fit: Supabase ($5B valuation, 4M+ developers) proves demand for schema-driven backends. Hasura and PostgREST demonstrate GraphQL/REST generation from schemas. Temporal's workflow-as-code shows executable specifications at scale. Prisma's schema-to-ORM validates developer appetite for declarative data modeling.

AI code generation adoption: Tools like Cursor, v0, and Lovable show explosive growth in AI-assisted development. But the gap is clear: unstructured prompting produces inconsistent results. The opportunity: schema-first AI generation where domain models provide structured context for predictable, verifiable code generation.

The innovation across these examples: using type systems and schemas as universal interoperability protocols. You specify semantics once; systems infer UI, APIs, validation, integrations, deployment. The technical stack represents commodity infrastructure. The value is the semantic layer of shared understanding.

This validates that schemas can be living architecture. The opportunity is generalizing this pattern: not just for specific verticals or layers, but as a comprehensive development paradigm spanning backends, data pipelines, workflows, and enterprise systems.

SchemaOS: Living Schema as Operating System

The vision emerging from industry trends and architectural evolution:

Core Thesis

In Software 2.0, schema is always in motion because AI needs to understand and adapt domain models. Editing schema = editing prompts = editing agent behavior. There's no meaningful distinction between "schema design" and "prompt engineering" and "code generation"—they're the same work at different abstraction levels.

The Three Primitives

1. Schema as semantic graph: Not just tables—entities, relations, aggregates, events, invariants. ReactFlow canvas for visual design. Text DSL for power users (version control, expressiveness). Bidirectional sync. Schema is simultaneously human-readable (domain expert collaboration), machine-readable (AI consumption), and executable (runtime validation).

2. Bindings as capability protocol: Common signatures that heterogeneous systems implement—Pagination (OData-like), WebhookSource, BulkExport, Search. A binding declares "this external system provides these capabilities with this shape." The schema editor shows available bindings from workspace integrations (Airtable, Shopify, Stripe). Users wire external data into the domain model. AI generates glue code just-in-time based on binding signatures + schema context. (A sketch of binding signatures follows this list.)

3. Workflows as stateful orchestration: Separate app but same semantic foundation. Schemas define "what exists," workflows define "what happens." Built-in webhook triggers for data changes. Automatic sync strategies (bulk, cron, indexing) for systems without webhooks. Standard patterns for relations, data augmentation, transformations. The workflow engine generates code based on schema context—no manual object mapping. (A workflow sketch also follows the list.)
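
A minimal sketch of what such binding signatures could look like in TypeScript. The interfaces follow the capability names above, but the exact shapes are assumptions, not a published protocol:

```typescript
// Hypothetical binding signatures: an external system declares which
// capabilities it implements; generated glue code targets those shapes.

interface Page<T> {
  items: T[];
  nextCursor?: string;
}

interface Pagination<T> {
  list(cursor?: string, limit?: number): Promise<Page<T>>;
}

interface WebhookSource {
  subscribe(url: string, events: string[]): Promise<{ subscriptionId: string }>;
  verifySignature(payload: string, signature: string): boolean;
}

interface BulkExport<T> {
  exportAll(): AsyncIterable<T[]>; // batched, for systems without webhooks
}

// A binding ties a workspace integration to a schema entity via the
// subset of capabilities that system actually provides.
interface Binding<T> {
  system: string; // e.g. "shopify", "airtable" (illustrative)
  entity: string; // which schema entity this feeds
  capabilities: Partial<Pagination<T> & WebhookSource & BulkExport<T>>;
}
```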
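
And a sketch of a workflow declaration bound to a schema entity. The API is invented for illustration; a real engine (Temporal-style or otherwise) would differ in detail:

```typescript
// Hypothetical workflow definition: a trigger plus steps over a schema entity.
interface WorkflowDef<E> {
  name: string;
  trigger:
    | { kind: "webhook"; entity: string; event: "created" | "updated" | "deleted" }
    | { kind: "cron"; entity: string; schedule: string }; // sync fallback
  steps: Array<(input: E) => Promise<E>>;
}

// "Schemas define what exists; workflows define what happens."
const enrichCustomer: WorkflowDef<{ id: string; email: string }> = {
  name: "enrich-customer",
  trigger: { kind: "webhook", entity: "Customer", event: "created" },
  steps: [
    async (c) => ({ ...c, email: c.email.toLowerCase() }), // normalization
    // further steps (augmentation, fan-out to integrations) would be
    // generated by the engine from schema + binding context
  ],
};
```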

The Radical Simplicity

You model domain once (schema), declare external dependencies (bindings), specify business logic (workflows). System generates everything else—APIs, UIs, validations, integrations, deployment configs. Not code generation as "one-time export" but continuous synchronization where schema remains source of truth.

AI Integration as Foundation

Not bolted-on copilot but core to architecture. Schema provides structured context for AI:

  • "Given these entities, suggest fields"
  • "Given this relationship, generate validation logic"
  • "Given these bindings, write integration code"

Verification built-in: generated code must conform to schema. Iteration predictable: change model, regenerate, validate automatically. This is Karpathy's verifiability loop—structured specification enables optimization.
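
A minimal sketch of that loop, again using Zod to stand in for schema conformance. The prompt-building and verification helpers are hypothetical; the point is that the schema both shapes generation and checks its output:

```typescript
import { z } from "zod";

const ProductSchema = z.object({
  name: z.string().min(1),
  priceCents: z.number().int().positive(),
});

// The schema is serialized into the prompt as structured context.
function buildPrompt(schemaSource: string, instruction: string): string {
  return `Given this domain schema:\n${schemaSource}\n\nTask: ${instruction}\nReturn JSON conforming to the schema.`;
}

// Generated output is never trusted blind: it must parse against the schema.
function verifyGenerated(raw: string): z.infer<typeof ProductSchema> | null {
  try {
    return ProductSchema.parse(JSON.parse(raw));
  } catch {
    return null; // fails verification -> regenerate with error feedback
  }
}
```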

Architecture Patterns That Emerge

Progressive disclosure in UX: Simple mode for 80% cases (basic entities, relations, validation). Advanced mode for edge cases (complex invariants, temporal logic, mereological constraints). Escape hatch to custom code when needed. Templates for common domains (e-commerce, CRM, project management). Users start with template, customize specifics.

Schema evolution first-class: Explicit versioning. Migration paths between versions automatic. Backward compatibility checks. Forward compatibility optional with explicit declaration. When schema changes, tool generates migrations (database, code, config). User reviews before applying. Rollback capability. Time-travel debugging: "Show application as it was with schema v1.2."
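
A sketch of what first-class versioning might check mechanically. The shapes are assumptions; the compatibility rule shown (additions allowed, removals and re-typings rejected) is one common convention:

```typescript
// Hypothetical versioned schema with a machine-checkable compatibility rule.

interface FieldDef {
  name: string;
  type: "string" | "number" | "boolean";
  required: boolean;
}

interface SchemaVersion {
  version: string; // e.g. "1.2.0"
  fields: FieldDef[];
}

// Backward compatibility: a new version may add fields, but must not
// remove or re-type any field that readers of the old version depend on.
function isBackwardCompatible(prev: SchemaVersion, next: SchemaVersion): boolean {
  return prev.fields.every((old) => {
    const match = next.fields.find((f) => f.name === old.name);
    return match !== undefined && match.type === old.type;
  });
}
```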

Multi-representation: Same schema renders as visual graph (ReactFlow canvas), text DSL (version control friendly), generated documentation (API specs, ERDs), executable code (type definitions, validation functions), configuration (deployment, infra). Change one, all others update. Single source of truth with multiple projections.

Collaborative ontology: Real-time multi-user editing (operational transforms / CRDTs). Comments and annotations on entities, fields, relationships. Version history and diffs. Integration with domain modeling practices (Event Storming sticky notes → schema entities, Domain Storytelling pictograms → workflow sequences). Bridge between business stakeholders and technical implementation.

Market Validation

$100B+ combined opportunity across low-code/no-code ($67B by 2030, 20% CAGR) and Backend-as-a-Service ($23B by 2032).

Proven demand signals:

  • Supabase: $5B valuation, 4M+ developers—proving managed backend demand
  • Retool: $3.2B valuation, proving willingness to pay for rapid development
  • Bubble: $100M+ revenue, showing no-code market maturity

But clear gap exists:

  • Low-code platforms require manual UI assembly
  • Schema-to-API tools (Hasura, PostgREST) stop at data layer
  • AI coding tools (Cursor, Lovable) generate from unstructured prompts with inconsistent results
  • Domain modeling tools (Context Mapper) produce documentation, not applications

No tool connects: domain model → complete working application

Market readiness indicators:

  • 80% of top PMs use AI to prototype (saving 2-3 weeks per feature)
  • 41% of businesses have citizen development programs
  • 70% of new applications will use low-code by 2025

The appetite is proven. The missing piece: domain-model-first development where business concepts drive both schema and application.

Why Now, Why This Matters

Confluence of readiness:

Infrastructure problems solved enough (Supabase, Convex, PartyKit handle backend complexity). AI models capable enough (Gemini 2.0, Claude Sonnet, GPT-4 Turbo understand deep context). Developer experience expectations high enough (vibe coding, natural language interfaces normalized). Domain-Driven Design practiced enough (DDD Europe, Explore DDD show continued growth and evolution).

The fundamental change:

Software development is moving from imperative ("how to implement") to declarative ("what we want"). Not "write a for loop to iterate over the array" but "sort alphabetically and remove duplicates." Not "implement authentication middleware" but "require an authenticated user with these permissions." Specification becomes the work; optimization through generation becomes the capability.
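
The shift fits in a few lines of TypeScript, using the document's own example of sorting and deduplicating:

```typescript
const names = ["carol", "alice", "bob", "alice"];

// Imperative: spell out how.
const seen = new Set<string>();
const deduped: string[] = [];
for (const name of names) {
  if (!seen.has(name)) {
    seen.add(name);
    deduped.push(name);
  }
}
deduped.sort();

// Declarative: state what. The "how" becomes an optimization target.
const sortedUnique = [...new Set(names)].sort();
```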

Living architecture emerges naturally:

When schema drives system behavior, architecture is no longer separate documentation. Architecture becomes inspectable, modifiable, executable at runtime. Schema reveals domain structure, state management, business rules, integration points.

But trade-offs remain real:

CAP/PACELC theorems prove impossible to abstract away. Event sourcing introduces operational complexity (schema evolution, coupling paradoxes). CRDTs have severe limitations (memory overhead, expressiveness constraints). Local-first breaks traditional security models. These aren't solved problems—they're fundamental choices that domain modeling makes explicit rather than hidden.

The opportunity isn't "infrastructure solved, only domain matters" but "infrastructure abstractions enable domain focus, while domain-infrastructure co-design remains critical." Tools must help architects navigate trade-offs, not claim to eliminate them.

Forward Path: Execution Requirements

The research validates opportunity but execution determines outcome. Key success factors:

1. Domain modeling UX dramatically better than code: Visual interface accessible to non-technical stakeholders. Text DSL for developers. Templates reduce the barrier to entry. If users can't create a valid domain model in under 30 minutes with a template, the UX isn't ready.

2. Generated code production-quality: Not just prototypes. Apps can deploy and run immediately without manual editing. Type-safe, validated, tested, documented. Integration with best-in-class infrastructure (Supabase backend, Vercel deployment) rather than reinventing.

3. Schema as structured prompt: Domain model provides rich context that dramatically improves AI generation quality. Enables automatic verification. Makes iteration predictable. Differentiates from unstructured AI tools where results vary wildly.

4. Open source core + cloud platform: Supabase model validated—core open source (community, trust), cloud hosting as revenue (convenience, collaboration). Freemium pricing: individual free, team $20/user/month, enterprise custom.

5. Intellectual honesty about limitations: Not claiming "final abstraction" but "this abstraction fits these domains well, has these trade-offs, requires these skills." History shows each abstraction layer helps but reveals new complexity. Acknowledge CAP trade-offs, event sourcing challenges, AI evolution.

The Paradigm Shift

The convergence is real:

  • Infrastructure commoditizing
  • AI maturing
  • Verification enabling automation
  • Domain modeling emerging as essential work
  • Market ready with $100B+ opportunity and clear gaps

The opportunity: build architecture that isn't separate from system but IS the system. Schema as source of truth. Workflows as orchestration. AI as generation capability. Living blueprints that remain synchronized with reality because they're literally driving reality.

This requires combining:

  • Deep architectural thinking (ontology, trade-offs, patterns)
  • Excellent UX (make complexity accessible)
  • AI integration (structured generation and verification)
  • Pragmatic engineering (build on proven infrastructure rather than reinvent)

Not just better tools for drawing boxes and lines. But a fundamentally different way of building software where architecture is living, executable, and essential rather than static, aspirational, and decaying.


This document synthesizes research on infrastructure commoditization, AI-driven development, domain modeling, and market opportunities. It proposes SchemaOS as a paradigm where schema becomes the operating system of software development—driving behavior, enabling generation, and remaining synchronized with implementation.
