Gonçalo Borrêga
From text to code to models (AI generated... obviously :-) )

The "Text-to-Code" Wall: Why Visual Models Are the Missing Link in Agentic AI

I want to share an honest conversation about "Vibe Coding" in the enterprise that I've been having with myself.

While I don't code daily, I'm still quite involved in projects that demand scale and proper architecture and, above all, are not one-shot apps: they need continuous maintenance and evolution, like any enterprise software.

I love AI development. It is the single biggest productivity lever we have seen in decades. Watching a junior developer build a React app in 20 minutes by chatting with an LLM is inspiring. It feels like magic. Getting CSS for a screen that looks better than anything I could describe to my designer... even better.

But as I recently told my team: Magic doesn't scale.

When you move from a weekend prototype to a mission-critical enterprise system, "text-to-code" hits a hard wall. I was recently speaking with a peer — a Head of Development at a massive European SaaS company — who has been deploying Agentic development at scale.

His feedback? It's a mess.

They tried having AI generate code directly from user stories. The result? A single simple user story generated 1,500 lines of code. It worked, technically. But the moment they needed to version it, roll it back, or have a human collaborate on it, the system collapsed under its own weight.

The problem isn't the AI. The problem is the interface.

The Verification Bottleneck

Andrej Karpathy, one of the leading minds in AI, recently hit the nail on the head regarding this friction. He pointed out that while AI makes generation instant, human verification is the bottleneck.

If an AI generates a 1,500-line diff in seconds, you are the bottleneck. You have to read it. And as Karpathy noted, reading text-based code is "effortful." It is slow, serial processing.

His solution? Visuals.

Karpathy called GUIs and visual representations a "highway to the brain." Visuals allow us to audit the system instantly. You can scan a visual flow and understand the logic in seconds, whereas reading the equivalent Python or C# takes minutes or hours.

But this goes deeper than just "reading speed." It goes to the very core of how humans think about software.

The Two Brains: Declarative vs. Procedural Thinking

In my years managing R&D and product teams, I've noticed a fundamental divide in how humans conceptualize software. There is a mental model gap that traditional coding (and now GenAI) struggles to bridge.

There are two types of thinkers in every organization:

  1. The "Excel Mind" (Declarative): These are often business analysts, product managers, or domain experts. They think in outcomes and relationships. They can build a massive, complex ERP system entirely inside an Excel spreadsheet. They don't care how the calculation happens step-by-step; they care that "Cell C1 = A1 + B1." This is Declarative thinking. The rules are encoded in the model itself.

  2. The "Algorithm Mind" (Procedural): These are the architects and engineers. If you ask an experienced OutSystems developer to describe a system, they won't describe a spreadsheet. They will describe a Flow. They say: "First we call that API. If the policy has coverage, we go right. If it does not, we loop into the extended coverages, then go left to sum everything up." This is Procedural thinking. It is an algorithm. (Both mindsets are sketched in code after this list.)
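
To make the contrast concrete, here is a minimal Python sketch of the two mindsets. Everything in it is invented for illustration: the cell rules, the Policy and Coverage objects, the settle function.

    from dataclasses import dataclass, field

    # Hypothetical domain objects, just to make the flow concrete.
    @dataclass
    class Coverage:
        amount: float

    @dataclass
    class Policy:
        has_coverage: bool
        base_amount: float = 0.0
        extended_coverages: list[Coverage] = field(default_factory=list)

    # Declarative ("Excel Mind"): encode the relationship, not the steps.
    cells = {"A1": 100, "B1": 250}
    rules = {"C1": lambda c: c["A1"] + c["B1"]}   # C1 *is* A1 + B1
    cells["C1"] = rules["C1"](cells)              # the engine decides when to evaluate

    # Procedural ("Algorithm Mind"): encode the flow, step by step.
    def settle(policy: Policy) -> float:
        if policy.has_coverage:                   # "if the policy has coverage, go right"
            return policy.base_amount
        total = 0.0
        for cover in policy.extended_coverages:   # "loop into the extended coverages"
            total += cover.amount                 # "then go left to sum everything up"
        return total

    print(cells["C1"])                            # 350
    print(settle(Policy(False, extended_coverages=[Coverage(50), Coverage(20)])))  # 70.0

The first style states what is true; the second states what happens. Same business rule, two very different artifacts to read and verify.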

The Trap of Text-to-Code

Here is where "Vibe Coding" fails the enterprise. You usually prompt the AI in declarative language ("Build me an app that tracks inventory"). The AI then dumps out a massive block of procedural text code (Python/JS).

The "Excel Mind" can't read the code because it's too abstract. The "Algorithm Mind" hates the code because it's a black box of 1,500 lines that is impossible to debug mentally.

The Model as the Universal Translator

This is why I believe in the power of Visual Models (DSLs). A visual domain-specific language is the only medium that respects both mental models simultaneously.

  • For the Declarative Thinker: The DSL exposes the data model, the entities, and the "rules" clearly. It looks like the system they imagined.
  • For the Procedural Thinker: The DSL visualizes the logic flow. You can literally see the loop, the decision diamond, and the integration point. (A toy sketch of such a model follows.)
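
To picture "one medium, two views," here is a toy model in Python. The format is invented for this post, not any vendor's actual metadata:

    # A toy model that carries both views at once.
    model = {
        "entities": {   # the declarative view: data and rules
            "Policy":   {"fields": ["Id", "HasCoverage", "BaseAmount"]},
            "Coverage": {"fields": ["Id", "PolicyId", "Amount"]},
        },
        "flows": {      # the procedural view: nodes and edges
            "SettleClaim": [
                {"node": "CallAPI",  "action": "GetPolicy"},
                {"node": "Decision", "if": "Policy.HasCoverage",
                 "then": "ReturnBase", "else": "SumCoverages"},
                {"node": "Loop",     "over": "Coverage", "do": "SumAmounts"},
            ],
        },
    }

The declarative thinker reads the entities; the procedural thinker reads the flow; a visual editor is just a renderer over the same structure.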

The Invisible Graph (What You See vs. What You Build)

However, we need to clarify a common misconception. When we talk about "Low-Code" or "Visual Development," people obsess over the drag-and-drop editor.

But the visuals are just the view. The real power — the "Chassis" that keeps the car from falling apart — is the Underlying Model.

Text-based code is essentially a bag of loose files. A script in one folder doesn't intrinsically "know" about a database schema in another folder until runtime. If an AI generates code that breaks a dependency, you might not find out until the build fails, or worse, until production crashes.

A Structured Model is different. It is a graph.

When our platform operates, it isn't just drawing pictures. It is managing a massive, invisible web of dependencies between every component, service, and data point in your portfolio.

  • It knows that Table A is used by Agent B.
  • It knows that API C requires a specific security role used in Workflow D.

This is the "Change Safety" that text-based AI cannot replicate.

When an AI Agent updates our Model, it isn't just writing syntax. It is updating a node in that graph. The system immediately calculates the impact. If the AI tries to delete a field that is used by a mobile app three layers deep, the Model rejects it before it ever becomes a bug.
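
Here is a minimal sketch of that idea, assuming nothing about how the actual platform engine works: a graph that records who uses what, computes transitive impact, and refuses an unsafe delete.

    from collections import defaultdict

    # A hypothetical model graph: every asset is a node, every use is an edge.
    class ModelGraph:
        def __init__(self) -> None:
            # asset -> the set of assets that directly use it
            self.used_by: dict[str, set[str]] = defaultdict(set)

        def add_dependency(self, asset: str, consumer: str) -> None:
            self.used_by[asset].add(consumer)

        def impact_of(self, asset: str) -> set[str]:
            # Everything that breaks, transitively, if `asset` changes.
            impacted, stack = set(), [asset]
            while stack:
                for consumer in self.used_by[stack.pop()]:
                    if consumer not in impacted:
                        impacted.add(consumer)
                        stack.append(consumer)
            return impacted

        def delete(self, asset: str) -> None:
            impacted = self.impact_of(asset)
            if impacted:  # reject before it ever becomes a bug
                raise ValueError(f"cannot delete {asset}: used by {sorted(impacted)}")
            self.used_by.pop(asset, None)

    g = ModelGraph()
    g.add_dependency("Field.Premium", "API.Quote")            # a field feeds an API...
    g.add_dependency("API.Quote", "Workflow.Renewal")         # ...which feeds a workflow...
    g.add_dependency("Workflow.Renewal", "MobileApp.Claims")  # ...three layers deep
    try:
        g.delete("Field.Premium")
    except ValueError as err:
        print(err)  # cannot delete Field.Premium: used by the API, workflow, and app

A text codebase has to rediscover those edges at build time, or in production; the graph carries them as first-class data.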

This is why we can support "Executable Specs." We aren't generating disposable code; we are evolving a living graph of enterprise knowledge.

The "Spec Factory"

We are building towards a future where the AI operates as a Spec Factory.

When you interact with the IDE of the future, you aren't "coding." You are validating.

  1. You describe the intent (Conversational).
  2. The AI updates the Model (The Metadata/Graph).
  3. The IDE reflects those changes Visually, for human verification (a sketch of this loop follows).
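
As a sketch of that loop, with every function below a stand-in for illustration rather than a real platform API:

    def ai_update_model(model: dict, intent: str) -> dict:
        # Stand-in for the AI: record the intent as a new flow in the model.
        return {**model, "flows": model["flows"] + [intent]}

    def render_visual_diff(old: dict, new: dict) -> None:
        # Stand-in for the IDE: surface the change for human verification.
        added = [f for f in new["flows"] if f not in old["flows"]]
        print("+", added)

    def human_approves() -> bool:
        return True  # in reality: you scan the visual diff, then accept or correct

    def spec_factory_step(model: dict, intent: str) -> dict:
        proposal = ai_update_model(model, intent)         # 2. AI updates the model
        render_visual_diff(model, proposal)               # 3. IDE reflects it visually
        return proposal if human_approves() else model    # you validate; you don't code

    model = {"flows": []}
    model = spec_factory_step(model, "Add approval step to expense flow")  # 1. intent

If the proposal is rejected, the model is left untouched and consistent.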

This distinction matters because it solves the "1,500 lines of code" problem.

When the AI updates a spec, it isn't writing throwaway code. It is manipulating a structured, hierarchical set of assets. If the AI gets it wrong, you don't have to rewrite the prompt and hope for a better random seed. You look at the visual representation, see the logic flaw, and correct it.

This allows for Configuration and Security (encoded in the platform by default), Versioning (with changes our human brains can understand), and Rollback (without merging across thousands of AI-changed lines and leaving the system unstable). These are the three pillars of enterprise engineering that "Vibe Coding" ignores.

Conclusion

I've said it before: AI is the engine. But you cannot drive an engine down the highway at 100 mph. You need a chassis: a frame that handles security, governance, and architecture.

The Model is that chassis.

If we want to move from "Text-to-Code" to "Enterprise Reality," we have to stop treating code as the goal. Code is a liability. It's hard to read, hard to manage, and hard to verify.

The goal is the System. And the best way to describe a System is a Model.


Originally published on LinkedIn, January 25, 2026.
