The real bottleneck in agentic AI isn’t protocol security, but legacy data systems built for human access. Here are the four structural shifts required to safely connect agents to enterprise data
There is a quiet crisis unfolding inside enterprise AI deployments right now, and it is not what the headlines say it is.

The headlines will tell you that MCP (Model Context Protocol) has a security problem. A recent disclosure revealed over 200,000 MCP-enabled servers exposed through a single stdio transport flaw. Security teams are scrambling. CISOs are issuing memos. Vendors are racing to patch CVEs.
But here is what those headlines miss: the security problem is a symptom. The real disease sits one layer deeper — in the enterprise data architecture that was designed long before AI agents existed and was never meant to serve them.
I have spent years designing data systems for enterprises at scale. And when I look at how organizations are deploying MCP today, I see the same pattern I have seen with every previous wave of enterprise technology: the new capability arrives, the infrastructure does not keep up, and failure is blamed on the new thing instead of the old foundation underneath it.
MCP is not the problem. The problem is that your data architecture was built for humans asking questions — not agents taking actions.
What MCP Actually Is (and Why It Matters More Than You Think)
If you have been following the AI space, you have likely encountered MCP described as “USB-C for AI” — a universal connector standard that lets AI models talk to tools, databases, and services through a single, standardized interface. That metaphor is useful, but it undersells the structural shift underway.
Before MCP, connecting an AI model to an enterprise tool required a custom integration for every model-to-tool combination. Ten AI applications and fifty data sources meant potentially 500 bespoke connectors — each with its own authentication logic, error handling, and maintenance burden.
MCP replaces that N×M problem with a single protocol layer. Anthropic introduced it in November 2024, and the adoption curve has been unlike anything I have seen in enterprise infrastructure. By April 2026, 78% of enterprise AI teams report at least one MCP-backed agent in production. The public MCP server registry has grown 7.8x in a single year to more than 9,400 servers. Every major AI lab — Anthropic, OpenAI, Google, Microsoft — now ships native MCP support.
Google’s Full Commitment at Cloud Next ’26
The clearest signal of MCP’s trajectory came from Google at Cloud Next ’26. Google did not just add MCP support — it rebuilt its API infrastructure around it. Every Google Cloud service is now MCP-enabled by default. Developers can point Gemini CLI or any MCP-compatible agent at a single, globally consistent endpoint and access BigQuery, Google Compute Engine, Google Kubernetes Engine, Google Maps, Gmail, Drive, Calendar, and more, all through standard MCP interfaces.
Google extended this capability through Apigee, allowing enterprises to expose and govern their own developer-built APIs and third-party services as discoverable tools for agents. The Gemini Enterprise Agent Platform includes an Agent Registry — a unified directory for discovering and managing agents, MCP servers, and tools in one place. Model Armor provides inline content safety, actively defending against prompt injection and data exfiltration attacks.
L’Oréal, deploying Google’s stack at global scale, described the shift clearly: agents are “securely connected to our single sources of truth, including our Beauty Tech Data Platform and core operational applications” through MCP. That is what production-grade MCP looks like — tightly coupled to enterprise data systems, governed, and auditable.
The question is: is your data architecture ready to be connected to?
The Four Blockers the 2026 MCP Roadmap Named (and What They Are Really About)
The 2026 MCP roadmap, published by lead maintainer David Soria Parra in March 2026, is unusually candid. It names four enterprise deployment blockers as the protocol’s top priority areas:
- Audit trails and observability — end-to-end visibility into what an agent requested and what a server executed
- SSO-integrated authentication — moving away from static client secrets toward identity-provider-managed access flows
- Gateway and proxy behavior — defined semantics for routing, authorization propagation, and session management through intermediaries
- Configuration portability — the ability to configure a server once and have it work across different MCP clients
Read those four items carefully. They are not protocol problems. They are data architecture problems that the protocol is now exposing.
Audit trails fail not because MCP lacks a logging spec, but because the underlying data systems were never designed to emit structured, agent-attributable event streams. Authentication breaks down not because OAuth is hard, but because enterprise data stores were built assuming human-initiated access flows — credential scoping was never designed for machine identities operating at agent speed. Gateway behavior is undefined not because the MCP community has not thought about it, but because most enterprise data architectures have no concept of an “agent access tier” between the application layer and the data layer.
Every one of these blockers has roots in how enterprise data was architected — before agents existed.
The Architecture Problem Nobody Is Talking About
Enterprise data architectures were designed around a fundamental assumption: a human being, or a human-initiated application, is the entity that asks for data.
This assumption is embedded in how we design access control (role-based, tied to human identity), how we structure query interfaces (request-response, synchronous, human-readable), how we think about audit trails (who logged in, what report did they run), and how we manage data lineage (which ETL pipeline moved what data when).
Agents violate every one of these assumptions simultaneously.
An agent does not log in once and run a report. It initiates dozens of data accesses per task, across multiple systems, on behalf of a user who may not be watching, in a pattern no human analyst would replicate, using credentials that are machine-issued rather than human-assigned.
When MCP brokers that access, it creates a new category of data consumer that most enterprise architectures have no model for. The result is what the WorkOS analysis of the 2026 roadmap describes precisely: “Teams building on MCP are stitching together custom logging, bolting on their own trace identifiers, and trying to reconstruct request chains after the fact.”
That is not an MCP problem. That is a data architecture debt problem, and MCP is the first tool powerful enough to make it visible at scale.
What a Data Architect Needs to Fix Before MCP Can Be Secured
Based on my work designing enterprise data systems, here are the four architectural changes that need to happen before MCP governance can work at scale.
1. Introduce an Agent Access Tier
Most enterprise data architectures have two access tiers: a raw data tier (data lakes, warehouses, operational databases) and an application tier (APIs, BI tools, dashboards). MCP-enabled agents need a third tier — an agent access tier — that sits between them.
This tier is not just an API gateway. It is a governed translation layer that enforces least-privilege tool scoping, routes agent requests through your existing identity provider, and emits structured observability events into your existing SIEM pipelines. Google’s Apigee implementation is a clear example of what this looks like at the infrastructure level. Enterprises need an equivalent layer inside their own data architectures, along the lines of the sketch below.
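The shape of that tier fits in a few dozen lines. Everything below is illustrative: the `AgentAccessTier` class, its scope model, and the event fields are assumptions standing in for whatever your identity provider and SIEM actually require.

```python
import json
import time
import uuid
from dataclasses import dataclass

# Hypothetical sketch of an agent access tier: every tool call is
# scope-checked, executed, and logged as a structured, agent-attributable
# event. All names here are illustrative, not a product or standard.

@dataclass
class AgentContext:
    agent_id: str        # machine identity issued by your IdP
    task_id: str         # the autonomous task this call belongs to
    on_behalf_of: str    # the human who authorized the agent
    scopes: frozenset    # least-privilege tool scopes granted for this task

class AgentAccessTier:
    def __init__(self, tools: dict, emit=print):
        self.tools = tools   # tool name -> callable against the data layer
        self.emit = emit     # sink for SIEM-bound events (stdout in this sketch)

    def call(self, ctx: AgentContext, tool: str, **kwargs):
        allowed = tool in ctx.scopes
        event = {
            "trace_id": str(uuid.uuid4()),
            "ts": time.time(),
            "agent_id": ctx.agent_id,
            "task_id": ctx.task_id,
            "on_behalf_of": ctx.on_behalf_of,
            "tool": tool,
            "allowed": allowed,
        }
        self.emit(json.dumps(event))       # observability before execution
        if not allowed:
            raise PermissionError(f"{ctx.agent_id} lacks scope {tool!r}")
        return self.tools[tool](**kwargs)  # governed execution

# Usage sketch:
#   tier = AgentAccessTier({"inventory_lookup": my_tool_fn})
#   tier.call(ctx, "inventory_lookup", sku="SKU-123")
```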
2. Redesign Data Identities for Machine Consumers
Human-centric RBAC was not designed for agents that operate continuously, autonomously, and across multiple systems in a single workflow. Enterprise data identities need to be extended to include workload identity: cryptographically attested machine identities that are scoped per agent, per task, or per session, not per human employee.
This is not a new concept in infrastructure security. It is the same principle that drove the shift from long-lived API keys to short-lived JWTs in cloud-native systems. Enterprise data architectures now need to apply that same model to agent access. The MCP 2026 roadmap’s SSO integration work, specifically Cross-App Access, is the protocol’s answer. But the data layer has to be ready to consume those scoped tokens.
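As a rough sketch of what a per-task workload token could look like, the snippet below mints a short-lived, task-scoped JWT with PyJWT. The claim names (`task`, `act_for`, `scopes`) are illustrative conventions rather than any standard, and a real deployment would sign with asymmetric keys held in a KMS.

```python
# pip install pyjwt
# Sketch of per-task workload identity: a short-lived token scoped to one
# agent, one task, and an explicit tool list. Claim names are assumptions.
import time

import jwt  # PyJWT

SIGNING_KEY = "replace-with-key-from-your-KMS"  # never hardcode in production

def mint_agent_token(agent_id: str, task_id: str, user: str,
                     scopes: list[str], ttl_seconds: int = 300) -> str:
    now = int(time.time())
    claims = {
        "sub": agent_id,           # the machine identity, not a human
        "task": task_id,           # scoped per task, dies with the task
        "act_for": user,           # the authorizing human, for audit trails
        "scopes": scopes,          # least-privilege tool list
        "iat": now,
        "exp": now + ttl_seconds,  # short-lived by construction
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")
```

The data layer then validates expiry and scopes on every request instead of trusting a long-lived credential.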
3. Instrument Data Sources for Agent-Attributable Observability
Most enterprise data systems emit logs tied to application sessions, not to individual agent actions within those sessions. When an agent running a thirty-minute autonomous workflow calls your data warehouse fourteen times across three tables, your current logging infrastructure likely records one session, not fourteen attributed events.
Agent-attributable observability means every data access event carries a trace identifier that links back to the originating agent identity, the triggering task, the authorizing user, and the policy that permitted the access. This is structurally similar to distributed tracing in microservices architectures, and it requires the same foundational instrumentation work. Google Cloud’s MCP implementation addresses this through Cloud IAM Deny policies for fine-grained access control, surfaced at the protocol layer. Internally, enterprises need equivalent instrumentation at the data source level.
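Because the instrumentation pattern really is distributed tracing, one plausible sketch uses the OpenTelemetry Python API directly. The attribute keys below (`agent.id`, `agent.policy_id`, and so on) are illustrative conventions, not an established semantic standard.

```python
# pip install opentelemetry-api opentelemetry-sdk
# Sketch: wrap each warehouse call in a span whose attributes make the
# access agent-attributable. Attribute keys are illustrative assumptions.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-data-access")

def traced_query(run_query, sql: str, *, agent_id: str, task_id: str,
                 authorized_by: str, policy_id: str):
    # One span per data access: fourteen warehouse calls become fourteen
    # attributed events instead of one opaque session.
    with tracer.start_as_current_span("warehouse.query") as span:
        span.set_attribute("agent.id", agent_id)
        span.set_attribute("agent.task_id", task_id)
        span.set_attribute("agent.authorized_by", authorized_by)
        span.set_attribute("agent.policy_id", policy_id)
        span.set_attribute("db.statement", sql)
        return run_query(sql)
```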
4. Make Data Source Configurations Portable and Declarative
The MCP roadmap’s config portability requirement — configure once, run everywhere — is architecturally impossible if your data source configurations are hardcoded into application code or stored in application-specific secrets vaults.
This requires a shift toward declarative data source definitions: structured metadata describing a data source’s schema, access patterns, authentication requirements, and governance policies, stored in a format that MCP clients can consume without bespoke integration code. This is the same architectural principle behind data catalogs and data contracts — applied now to agent connectivity.
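A minimal sketch of such a definition follows, with field names invented for illustration rather than drawn from any MCP specification.

```python
# pip install pyyaml
# Sketch of a declarative data source definition: schema, auth, and
# governance described as portable metadata instead of hardcoded client
# code. Every field name here is a hypothetical convention.
import yaml

SOURCE_DEFINITION = """
name: orders_warehouse
kind: bigquery                 # hypothetical source type
schema_ref: catalogs/orders/v3
auth:
  mode: workload_identity      # no static secrets in client configs
  token_audience: orders-agent-tier
access_patterns:
  - read_only_sql
governance:
  policy_id: pol-orders-least-priv
  audit_stream: siem/agent-access
"""

definition = yaml.safe_load(SOURCE_DEFINITION)

# Any MCP client (or the agent access tier in front of it) can consume the
# same definition: configure once, run everywhere.
assert definition["auth"]["mode"] == "workload_identity"
print(f"Loaded declarative source: {definition['name']}")
```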

Why This Matters Now
MCP adoption is not slowing down. Google’s Cloud Next ’26 commitment, the Linux Foundation governance model, and the 78% enterprise production adoption rate make that clear. The 2026 MCP roadmap’s enterprise working group is actively building the governance infrastructure the protocol needs.
But protocol-level governance cannot compensate for data architecture that was never designed for agents. The enterprises that are going to struggle in 2026 and 2027 are not the ones that failed to patch their MCP servers. They are the ones that bolted MCP onto data architectures that were designed for a world where humans asked the questions.
MCP is ready for production. The question every Data Architect should be asking is: am I?
