
Context Is the Chassis: Why Your Agents Need Architecture Before They Need Intelligence
Last week I spent time with five different enterprises — financial services, insurance, education — all in the early stages of their AI agent initiatives.
What struck me wasn't what they'd built. It was how consistently, across all five, they were hitting the same challenges before scaling.
They're building agents for specific use cases: customer intelligence, process automation, compliance monitoring. The agents work in controlled demos. But when they try to scale them across the organization, they run into the same wall:
The agents don't understand the business context.
When someone asks "What was our revenue growth last quarter?" the agent can't answer. Not because the data doesn't exist — they have six different revenue tables across three warehouses. The problem is the agent doesn't know which definition of "revenue" to use. Is it ARR? Run rate? Which fiscal quarter? Which geography? The tribal knowledge that makes the question answerable lives in people's heads, not in systems.
This isn't an edge case. It's the rule.
The Enterprise Context Gap
A recent piece from a16z nailed the problem: "Most [AI deployments] fail due to brittle workflows, lack of contextual learning, and misalignment" with how businesses actually operate. The article argues that agents need more than data access — they need a modern context layer that captures business logic, canonical definitions, conditional rules, and the accumulated wisdom of how your organization actually works.
I agree completely. But I want to go deeper.
The problem isn't just that we need to add context to our agents. The problem is we've been building agents before building the context infrastructure they require. We've been putting engines in cars that don't have a chassis.
Let me explain what I mean.
Why Context Isn't a Feature — It's Architecture
When most enterprises think about adding "context" to their AI agents, they imagine it as a layer you bolt on. A semantic layer. A knowledge base. Some documentation that gets fed to an LLM.
But context at enterprise scale isn't a feature. It's architecture.
Here's why the a16z framework is exactly right, and why most enterprises struggle to implement it:
The five steps they outline:
- Data accessibility — Consolidate all relevant sources
- Automated context construction — Extract patterns from queries, definitions, usage
- Human refinement — Add conditional, tribal knowledge
- Agent connection — Expose context via APIs
- Self-updating flows — Continuously refine based on reality
Look at that list. Every single step requires platform capabilities, not just AI features. You need:
- A way to model your business logic (not just store it in text)
- A way to consolidate data without creating another silo
- A way to capture human refinements systematically
- A way to govern what agents can access
- A way to observe when context drifts from reality
This is infrastructure. This is the chassis.
What Context Architecture Actually Looks Like
Let me share a concrete example of what this looks like in practice. I work on enterprise application platforms, so I've had the chance to think through these problems from first principles. This isn't the only way to solve it, but it's one approach that addresses all five layers the a16z article describes.
1. The Model as Context Structure
At the foundation, you need what I think of as an executable business model. Not a database schema. Not a semantic layer. A structured, executable representation of your entire application — data structures, business logic, access controls, integrations, UI, workflows.
Why does this matter for context?
Because the model IS the context. When you define an entity called "Revenue" in this model, you're not just creating a database table. You're declaring:
- What constitutes revenue (the data structure)
- Who can access it (security model)
- How it's calculated (business logic)
- Where it comes from (integrations)
- How it changes over time (lifecycle)
The model doesn't just describe your business context. It is your business context.
This is critical. When an agent needs to understand "What is revenue?" it's not parsing documentation or guessing from column names. It's reading a structured, governed, executable definition that's kept in sync with reality because it's what runs your business.
(In the platform I work on, we call this OML — OutSystems Model Language. But the concept applies regardless of implementation: you need executable models, not just documentation.)
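To make the idea concrete, here is a minimal sketch of what an executable entity definition could look like. This is an illustration, not OML or any real platform API: the names (`EntityModel`, `read_definition`) and shapes are assumptions. The point is that the five facets listed above live in one governed, machine-readable object instead of scattered documentation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EntityModel:
    """Hypothetical executable model: one governed definition per entity."""
    name: str
    fields: dict[str, str]                 # data structure
    allowed_roles: set[str]                # security model
    calculation: Callable[[dict], float]   # business logic
    sources: list[str]                     # integrations
    lifecycle: str                         # how it changes over time

revenue = EntityModel(
    name="Revenue",
    fields={"amount": "decimal", "fiscal_quarter": "string", "geo": "string"},
    allowed_roles={"finance", "executive"},
    # The canonical definition: recognized revenue, not bookings or ARR.
    calculation=lambda row: row["recognized_amount"],
    sources=["erp.invoices", "billing.subscriptions"],
    lifecycle="restated at quarter close",
)

def read_definition(entity: EntityModel, role: str) -> dict:
    """An agent reads the governed definition instead of guessing from column names."""
    if role not in entity.allowed_roles:
        raise PermissionError(f"{role} cannot access {entity.name}")
    return {"name": entity.name, "fields": entity.fields,
            "sources": entity.sources, "lifecycle": entity.lifecycle}
```

When the agent is asked "What was our revenue growth last quarter?", it resolves "revenue" through this definition, and the access check comes along for free.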
2. Virtualized Data Access as the Accessibility Layer
The a16z article starts with data accessibility — consolidating sources. This is where most enterprises get stuck. They try to build a data warehouse, or a data lake, or (God help them) a data mesh. They end up with another silo.
A better approach: virtualized data access.
Instead of moving data into yet another warehouse, create a virtualized layer across your existing sources. Think of it as a context broker. When an agent needs customer data, it doesn't query seventeen different systems. It queries this layer, which knows:
- Where the canonical customer record lives
- Which systems have supplementary data
- How to resolve identity across sources
- What the business definitions are
But here's the key: this virtualization layer isn't separate from the model. It uses the model to understand what "customer" means, what relationships exist, what access controls apply. The context layer and the data layer are unified.
This is what the a16z piece means by "consolidated sources" — but consolidation doesn't mean copying. It means governed access with unified context.
(We call this Data Fabric in our platform. Other implementations might use different terms — the point is virtualized, model-aware data access.)
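A toy sketch of the context-broker idea, under stated assumptions: the catalog, the identity map, and the `fetch` callback are all invented for illustration and do not correspond to any real Data Fabric API. What it shows is consolidation without copying: one logical query, federated across the systems that already hold the data.

```python
class ContextBroker:
    """Hypothetical virtualized access layer: routes queries, never copies data."""

    def __init__(self):
        # Which system holds the canonical record, and which ones supplement it.
        self.catalog = {
            "customer": {"canonical": "crm", "supplementary": ["billing", "support"]},
        }
        # Identity resolution: each system's local key maps to a shared id.
        self.identity = {("crm", "C-42"): "cust:42", ("billing", "9001"): "cust:42"}

    def resolve(self, system: str, local_id: str) -> str:
        return self.identity[(system, local_id)]

    def query(self, entity: str, shared_id: str, fetch) -> dict:
        """Federate one read across the canonical source plus supplements."""
        spec = self.catalog[entity]
        record = fetch(spec["canonical"], shared_id)
        for system in spec["supplementary"]:
            record.update(fetch(system, shared_id))
        return record

# Stand-ins for the seventeen real systems.
STORES = {
    "crm": {"cust:42": {"name": "Acme Corp"}},
    "billing": {"cust:42": {"plan": "enterprise"}},
    "support": {"cust:42": {"open_tickets": 2}},
}

def fetch(system: str, shared_id: str) -> dict:
    return dict(STORES[system].get(shared_id, {}))

broker = ContextBroker()
record = broker.query("customer", broker.resolve("billing", "9001"), fetch)
```

The agent asks for "the customer behind billing account 9001" and gets one merged record; the broker decided where canonical data lives and how identities line up.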
3. Semantic Search as Context Discovery
Here's where it gets practical: semantic, model-aware search.
The scenario: You have thousands of customer records, product descriptions, support tickets, process documentation — all sitting in your databases. Traditionally, an agent searching for "customers with unresolved billing issues in Q4" would need exact SQL queries with perfect schema knowledge.
With semantic search built on top of the model, the agent describes what it needs in natural language. The system:
- Understands the semantic intent
- Maps it to the underlying model structure
- Retrieves contextually relevant data
- Returns results that honor your governance model
This is the "automated context construction" step from the a16z framework. But because it's built on top of the model, it's not just keyword matching. It's context-aware retrieval.
The agent doesn't just find records with the word "billing" — it finds records that fit the business definition of unresolved billing issues as defined in your model.
(We just shipped this as Semantic Search in our platform. But the pattern is universal: semantic search needs to be model-aware, not just text-matching.)
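Here is a deliberately tiny sketch of the difference between keyword matching and model-aware retrieval. The fuzzy string match stands in for embedding-based intent resolution in a real system, and `BUSINESS_DEFINITIONS` is an invented stand-in for definitions that would come from the model; only the pattern matters: resolve the question to a governed definition first, then filter by that definition's rule, not by the words.

```python
import difflib

# Stand-in for governed definitions held in the model: what "unresolved
# billing issue" means to the business, not what the words literally say.
BUSINESS_DEFINITIONS = {
    "unresolved billing issues": {"category": "billing", "status": {"open", "escalated"}},
}

def semantic_query(question: str, records: list[dict]) -> list[dict]:
    # Match the question to the closest governed definition (a toy proxy
    # for semantic/embedding retrieval), then apply the model's rule.
    matches = difflib.get_close_matches(
        question, list(BUSINESS_DEFINITIONS), n=1, cutoff=0.3)
    spec = BUSINESS_DEFINITIONS[matches[0]]
    return [r for r in records
            if r["category"] == spec["category"] and r["status"] in spec["status"]]

tickets = [
    {"id": 1, "category": "billing", "status": "open"},
    {"id": 2, "category": "billing", "status": "resolved"},   # keyword hit, not a business hit
    {"id": 3, "category": "shipping", "status": "open"},
]
hits = semantic_query("customers with unresolved billing issues in Q4", tickets)
```

Ticket 2 contains "billing" but is resolved, so the model's definition excludes it; a keyword search would have returned it.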
4. Event-Driven Context Updates
The last piece is what the a16z article calls "self-updating flows" — the ability for your context layer to evolve as your business changes.
You need event-driven architecture at the database level. Here's why this matters for context:
When data changes in your system — a customer status update, a new product launch, a policy change — event triggers fire automatically. They can:
- Update dependent calculations
- Notify downstream agents
- Trigger workflow automations
- Log context changes
This creates a living context layer. When your business definition of "active customer" changes (maybe you shift from 90-day to 30-day activity windows), that change propagates through:
- The model definition
- The semantic search indices
- The virtualized data views
- The event-triggered workflows
Your agents don't work with stale context. They work with reality.
(We recently shipped Database Event Triggers in our platform to enable this. Other platforms might implement it differently — the key is that context updates propagate automatically when data changes.)
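The propagation mechanism can be sketched in a few lines. This is a generic in-process event bus, assumed for illustration (a real database trigger system runs in the data tier), but it shows the shape: one definition change, several dependent layers updated automatically, no manual documentation pass.

```python
from collections import defaultdict

class EventBus:
    """Minimal pub/sub stand-in for database event triggers."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def on(self, event: str, handler):
        self.handlers[event].append(handler)

    def emit(self, event: str, payload: dict):
        for handler in self.handlers[event]:
            handler(payload)

bus = EventBus()
search_index: dict = {}   # stand-in for semantic search indices
audit_log: list = []      # stand-in for context-change logging

# Dependent layers subscribe once; every definition change reaches them.
bus.on("definition_changed",
       lambda p: search_index.update({p["entity"]: p["rule"]}))
bus.on("definition_changed", audit_log.append)

# The business shifts "active customer" from a 90-day to a 30-day window.
bus.emit("definition_changed",
         {"entity": "active_customer", "rule": "activity within 30 days"})
```

After the emit, the search index already reflects the 30-day rule and the change is logged; no agent ever sees the stale 90-day definition.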
5. The Agent Development Layer
All of this infrastructure exists to enable one thing: agents that actually work.
The final layer is where business users and developers actually build agents — but they're building on top of this context foundation.
When you build an agent on this architecture:
- It inherits the model's business logic and governance
- It queries through the virtualized data layer's unified context
- It uses semantic search for natural language understanding
- It responds to events for real-time adaptation
You're not building an agent from scratch. You're building an agent on top of established enterprise context infrastructure.
This is the difference between "a smart chatbot" and "an enterprise agent that ships value."
(The layer we provide for this is called Agent Workbench. But the principle applies to any agent development environment: it needs to be built on context infrastructure, not trying to create context on the fly.)
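What "inheriting" the foundation means in code terms can be sketched as follows. Everything here is hypothetical (`ContextAgent`, the model dict, the data-layer callback are illustrative names, not a real Agent Workbench API): the agent itself is thin, because governance and data access come from the layers described in the previous sections rather than being re-implemented per agent.

```python
class ContextAgent:
    """Hypothetical agent that inherits context instead of creating it."""

    def __init__(self, model: dict, data_layer, role: str):
        self.model = model        # inherited business logic and governance
        self.data = data_layer    # inherited virtualized data access
        self.role = role

    def answer(self, entity: str, record_id: str) -> dict:
        definition = self.model[entity]
        # Governance is enforced by the model, not by per-agent code.
        if self.role not in definition["allowed_roles"]:
            raise PermissionError(f"{self.role} cannot read {entity}")
        # Data access goes through the virtualized layer, not raw tables.
        return self.data(entity, record_id)

MODEL = {"customer": {"allowed_roles": {"support", "sales"}}}

def data_layer(entity: str, record_id: str) -> dict:
    # Stand-in for the virtualized access layer.
    return {"entity": entity, "id": record_id, "status": "active"}

agent = ContextAgent(MODEL, data_layer, role="support")
```

The agent's own logic is a handful of lines; the chassis did the rest.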
What the a16z Article Gets Right (And What It Misses)
The a16z framework is exactly right about the problem: agents need context, and context requires human refinement, automated construction, and continuous updates.
But here's where my experience building enterprise platforms diverges: the article treats these as sequential steps, and they aren't. They're simultaneous requirements.
You can't build data accessibility, then add context, then add governance, then add agents. You need architecture that provides all of these as integrated capabilities from day one.
That's why I've been using the chassis metaphor with customers.
You wouldn't build a car by:
- First getting an engine
- Then figuring out where to put the wheels
- Then adding steering
- Then trying to make it safe
You'd design a chassis that integrates all of these from the start.
The same is true for enterprise AI. Context isn't a layer you add. Context is the foundation you build on.
The Competitive Implications
Here's where this gets strategic.
If context is infrastructure, and infrastructure has increasing returns to scale (the more you use it, the more valuable it becomes), then context becomes a moat.
Think about it:
- Every agent you build adds to your model
- Every workflow adds business logic
- Every integration adds data relationships
- Every user interaction refines the context
Five years from now, the enterprises that win with AI won't be the ones with the best LLMs. They'll be the ones with the best context infrastructure.
The LLMs are commoditizing. The context is defensible.
This is why the "build vs. buy" debate for AI platforms misses the point. You're not choosing between building an agent or buying an agent. You're choosing between building context infrastructure or leveraging existing context infrastructure.
If you build agents on top of fragmented systems, you're building on sand. If you build agents on top of unified context architecture — a Model, a Fabric, governed search, event-driven flows — you're building on a foundation that compounds.
What This Means for Your Enterprise
If you're leading AI transformation (or thinking about it), here are the questions to ask:
1. Do you have a unified model of your business logic? Not documentation. Not a wiki. An executable, governed model that defines how your business works.
2. Can your agents access context without creating new silos? Data Fabric, not data warehouse. Unified access, not more copies.
3. Can your agents discover what they need semantically? Natural language context retrieval, not just SQL queries.
4. Does your context layer update automatically when business changes? Event-driven flows, not manual documentation updates.
5. Is context infrastructure, or is it a bolt-on? If your agents break when business logic changes, context is a bolt-on. If your agents adapt when business logic changes, context is infrastructure.
The Path Forward
I've said it before, and I'll keep saying it: Magic doesn't scale.
"Vibe coding" agents crash at 1,500 lines of generated code. Data agents that guess at business definitions crash at the first real question. Workflow agents that don't understand conditional logic crash when the business changes.
What scales is architecture. What scales is context as infrastructure.
The a16z article is right to call for a modern context layer. I'm arguing we need to go further: we need context architecture built into the platform, not bolted onto fragmented systems.
My bet is that the enterprises that build on unified context infrastructure will move 10x faster than the ones trying to retrofit context onto legacy systems or point-solution agents.
The chassis matters more than the engine.
Context is the new moat.
What's your take? Are you building context as infrastructure, or trying to bolt it on later? I'd love to hear how enterprises are tackling this problem.
#AgenticAI #EnterpriseAI #DataArchitecture