
Your AI agent just got deployed to production. How do other agents find out what it can do? How does it discover their capabilities? How can agents negotiate authentication without a human setting up API keys?
These aren’t new questions. The web has been solving discoverability problems for decades. The patterns that emerged are now shaping how AI agents find, trust, and talk to each other. Here’s how we got here, where we are, and where it’s going.
The past: How the web learned to introduce itself
Every protocol needs a way to say “here’s what I support.” Early on, the web handled this in an ad-hoc fashion: robots.txt, favicon.ico, and crossdomain.xml were each dropped into the web root without coordination.
RFC 5785 fixed this by reserving /.well-known as a single prefix for site-wide metadata, backed by an IANA registry to prevent naming collisions. RFC 8615 (2019) superseded it with broader URI scheme support and tighter registration rules.
The pattern was simple and effective: put a machine-readable document at a predictable URL so clients can discover capabilities before establishing a connection. That same pattern is now the foundation for agent discovery.
The present: Agent discovery in production
A2A Agent Cards
In April 2025, agent-card.json became the first AI-agent-specific entry in the IANA .well-known registry. It was registered under the Linux Foundation as part of the Agent2Agent (A2A) protocol, which hit Release Candidate v1.0 in January 2026.
An Agent Card is a JSON document that tells other agents: here’s my name, here’s what I can do, here’s how to talk to me.
{
  "name": "travel-planner",
  "description": "Plans multi-city trips with flights and hotels",
  "url": "https://travel.example.com/a2a",
  "version": "1.0",
  "capabilities": { "streaming": true, "pushNotifications": false },
  "skills": [{
    "id": "plan-trip",
    "name": "Trip Planner",
    "description": "Creates optimized multi-city itineraries",
    "tags": ["travel", "planning"],
    "examples": ["Plan a 5-day trip to Tokyo and Kyoto"]
  }],
  "authentication": { "schemes": ["bearer"] }
}
A client agent fetches this, parses your skills and auth requirements, and submits tasks to your service endpoint via JSON-RPC 2.0. The card is the integration contract.
Note that before A2A Version 0.3.0, the path was /.well-known/agent.json, but the IANA registration is under agent-card.json. Some implementations serve one, some serve both, some redirect. If you’re writing a discovery client, you probably need to check both paths and handle redirects gracefully. The A2A spec itself acknowledges this. The Agent Card URL can actually be anywhere, and /.well-known/ is just the convention for public, unauthenticated discovery.
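Here’s a minimal discovery sketch in Python showing that fallback in practice. It assumes the requests library; the origin URL is a placeholder, and the fallback order simply reflects the convention described above:
# Sketch: fetch an A2A Agent Card, trying the registered path first and the
# legacy pre-0.3.0 path as a fallback. requests follows redirects by default.
import requests

CANDIDATE_PATHS = [
    "/.well-known/agent-card.json",  # IANA-registered path
    "/.well-known/agent.json",       # legacy path used before A2A 0.3.0
]

def fetch_agent_card(origin: str) -> dict:
    for path in CANDIDATE_PATHS:
        resp = requests.get(origin + path, timeout=10)
        if resp.ok:
            return resp.json()
    raise RuntimeError(f"No Agent Card found at {origin}")

card = fetch_agent_card("https://travel.example.com")
print(card["name"], [skill["id"] for skill in card.get("skills", [])])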
With the Google Agent Development Kit, you have two paths:
- to_a2a() auto-generates a card from your agent code. This is useful for prototyping, but keep in mind that the output is automatically determined; you may want to hand-tune the descriptions and examples before going to production (see the sketch after this list).
- adk api_server --a2a serves a manually-authored agent.json from your project directory.
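Here’s a minimal sketch of the first path. The import path and to_a2a() signature below reflect current ADK documentation and may differ between releases, so treat them as assumptions and verify against your installed version:
# Sketch: expose an ADK agent over A2A with an auto-generated Agent Card.
# Import path and signature are assumptions; check your ADK version.
from google.adk.agents import Agent
from google.adk.a2a.utils.agent_to_a2a import to_a2a

root_agent = Agent(
    name="travel_planner",
    model="gemini-2.0-flash",
    instruction="Plan multi-city trips with flights and hotels.",
)

# Wraps the agent in an A2A-compatible ASGI app that serves the generated
# card at /.well-known/agent-card.json; run it with e.g. `uvicorn agent:a2a_app`.
a2a_app = to_a2a(root_agent, port=8001)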
Vertex AI Agent Engine supports A2A natively, so once you deploy there, your agent gets sessions, memory, tracing, and monitoring without extra wiring. The A2A Purchasing Concierge codelab walks through the full publish-and-discover flow on Cloud Run and Agent Engine.
On A2A vs. MCP: A2A is for agent-to-agent (“I need a travel agent to help with this trip”). MCP is for agent-to-tool (“Call the flights API with these parameters”). In practice, the harder question is: when does a “tool” become complex enough that it should be an “agent”? If your tool needs multi-turn conversation, error recovery, or its own sub-agents, A2A is probably the right fit. If it’s a stateless function call, MCP.
UCP Commerce Manifests
The Universal Commerce Protocol (UCP) launched in January 2026, co-developed by Google with over 20 partners including Shopify, Walmart, Target, Stripe, Visa, and Mastercard. It’s narrower than A2A: it is specifically about agentic commerce. The problem it solves: your AI shopping agent encounters a merchant it has never integrated with. Without UCP, you’d need a custom integration for every retailer. With UCP, the agent fetches /.well-known/ucp and gets a manifest of supported commerce operations (product search, inventory, cart, checkout, order tracking) along with the endpoints and signing keys for each.
The pragmatic design choice here is that UCP bundles payment authorization through the Agent Payments Protocol (AP2). Merchants declare AP2 support directly in their UCP manifest via a dev.ucp.shopping.ap2_mandate capability. AP2 uses cryptographically signed Mandates, essentially tamper-proof receipts of user consent, and Verifiable Digital Credentials to prove a human actually authorized a purchase.
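To make the flow concrete, here’s a hedged sketch of a client probing a merchant’s manifest. Only the /.well-known/ucp path and the dev.ucp.shopping.ap2_mandate capability key come from the protocol as described above; the capabilities field name and overall shape are hypothetical placeholders, since the UCP schema is still new:
# Sketch: probe a merchant's UCP manifest and check for AP2 payment support.
# The "capabilities" field name is a hypothetical placeholder; only the
# /.well-known/ucp path and the ap2_mandate key come from the text above.
import requests

def fetch_ucp_manifest(origin: str) -> dict:
    resp = requests.get(origin + "/.well-known/ucp", timeout=10)
    resp.raise_for_status()
    return resp.json()

manifest = fetch_ucp_manifest("https://shop.example.com")
capabilities = manifest.get("capabilities", [])
supports_ap2 = "dev.ucp.shopping.ap2_mandate" in capabilities
print("Merchant supports AP2 mandates:", supports_ap2)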
The future: What’s being proposed
MCP Server Cards
Right now, connecting to an MCP server means hardcoding a URL or manually adding it to your client config. There’s no equivalent of the Agent Card for MCP. You can’t just point at a domain and say “what MCP tools do you offer?”
SEP-2127 is the formal Draft proposal to fix this. It introduces MCP Server Cards — structured metadata documents that describe an MCP server’s capabilities, transports, auth requirements, and available primitives, all before you establish a connection. The proposed path is /.well-known/mcp/server-card, and servers can also expose the same data post-connection as an MCP resource at mcp://server-card.json.
// /.well-known/mcp/server-card
{
  "name": "io.modelcontextprotocol.anonymous/weather-server",
  "version": "1.0.0",
  "description": "Real-time weather data for any location",
  "remotes": [
    {
      "type": "sse",
      "url": "https://weather.example.com/sse",
      "supportedProtocolVersions": ["2024-11-05"],
      "authentication": { "required": true, "schemes": ["bearer"] }
    }
  ],
  "capabilities": {
    "tools": { "listChanged": true },
    "resources": { "subscribe": true }
  },
  "tools": [
    {
      "name": "get_forecast",
      "description": "Get weather forecast for a city",
      "inputSchema": {
        "type": "object",
        "properties": { "city": { "type": "string" } }
      }
    }
  ]
}
The Server Card schema mirrors the MCP Registry’s server.json format for ecosystem compatibility. A remotes array declares HTTP-based connection options (Streamable HTTP, SSE) with per-remote authentication and supported protocol versions. A packages array covers local installs (npm, pip, Docker) with environment variables and runtime arguments. Tools, resources, and prompts can be listed statically in the card or marked “dynamic” to indicate they must be discovered at runtime.
For agent developers, this gap means you configure MCP tools manually today. Once Server Cards land, agent frameworks could auto-discover available tools from a domain (similar to how to_a2a() auto-generates a card today but in reverse: consuming instead of publishing).
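Here’s a sketch of what that auto-discovery step could look like, assuming SEP-2127 lands roughly as drafted; the path and field names follow the proposal shown above and could still change:
# Sketch: discover an MCP server's advertised tools from its proposed Server Card.
# Path and schema follow the SEP-2127 draft shown above and may change.
import requests

def fetch_server_card(origin: str) -> dict:
    resp = requests.get(origin + "/.well-known/mcp/server-card", timeout=10)
    resp.raise_for_status()
    return resp.json()

card = fetch_server_card("https://weather.example.com")
for tool in card.get("tools", []):
    print(tool["name"], "-", tool.get("description", ""))
for remote in card.get("remotes", []):
    print("connect via", remote["type"], "at", remote["url"])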
AI Catalog and Unified AI Card
A2A assumes one agent per host. Early MCP proposals assumed one server per host. But what about a platform that runs multiple agents and multiple MCP servers on the same domain?
PR #4 in the AI Card working group proposes a cohesive solution: AI Catalog for discovery and Unified AI Card for identity.
The AI Catalog, hosted at /.well-known/ai-catalog.json, is the single entry point. Instead of forcing a client to probe multiple protocol-specific paths, the catalog provides a unified list of pointers to all AI services on the domain.
Each pointer leads to a Unified AI Card (ai-card.json). This is the identity document for the service. It lifts common metadata (identity, trust, publisher info) into a shared layer. The Unified AI Card acts as a container for detailed protocol definitions (MCP, A2A, etc.), including emerging standards like the MCP Server Card within its protocols map.
The architecture separates discovery (finding the services via Catalog) from detailed inspection (checking capabilities via Card), allowing efficient crawling without fetching megabytes of metadata for every service.
Here’s a glimpse of what that looks like in the current draft. The Catalog is a lightweight index:
// /.well-known/ai-catalog.json
{
  "specVersion": "1.0.0",
  "services": [
    {
      "id": "did:example:agent-finance-001",
      "name": "Acme Finance Agent",
      "cardUrl": "https://api.acme-corp.com/agents/finance/ai-card.json"
    }
  ]
}
And the Card unifies the specific protocols under one identity:
// https://api.acme-corp.com/agents/finance/ai-card.json
{
  "id": "did:example:agent-finance-001",
  "identityType": "did",
  "publisher": { "name": "Acme Financial Corp" },
  "protocols": {
    "mcp": {
      "type": "mcp",
      "protocolSpecific": {
        "transport": { "type": "sse", "endpoint": "..." },
        "capabilities": { "tools": { "..." } }
      }
    },
    "a2a": {
      "type": "a2a",
      "protocolSpecific": {
        "url": "https://api.acme.com/a2a"
      }
    }
  }
}
Note: This specification is currently in active development. Details like field names (e.g., services vs interfaces) and structural decisions are being discussed in the working group. Treat this as a directional look at the architecture rather than a finalized spec ready for implementation.
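With that caveat in mind, here’s roughly what the two-step flow (catalog first, then only the cards you care about) could look like from the client side; the field names follow the draft examples above:
# Sketch: crawl a domain's AI Catalog, then fetch the full card only for the
# services of interest. Field names follow the draft examples above and are
# subject to change as the working group iterates.
import requests

def discover_services(origin: str) -> list[dict]:
    catalog = requests.get(origin + "/.well-known/ai-catalog.json", timeout=10).json()
    return catalog.get("services", [])

def inspect_service(service: dict) -> dict:
    return requests.get(service["cardUrl"], timeout=10).json()

for service in discover_services("https://api.acme-corp.com"):
    card = inspect_service(service)
    print(service["name"], "supports:", list(card.get("protocols", {}).keys()))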
The open question
Right now, a full-featured agent domain could need to serve agent-card.json, ucp, potentially mcp/server-card, plus the various OAuth endpoints. Are we heading toward a future where agents make five discovery fetches before saying hello?
Maybe. The composable design of these protocols (A2A for coordination, UCP for commerce, MCP for tools, OAuth for auth) is elegant in theory but chatty in practice. The AI Catalog is the proposed answer to this friction, attempting to consolidate discovery into a single fetch. Whether this unified approach takes hold, or whether we accept the multi-fetch pattern as the cost of modularity, is one of the key open design questions in this space.
What to do about it
If you’re standing up an A2A server today, publish an agent-card.json. ADK’s to_a2a() will get you a working card for prototyping; write one by hand for production. If you’re building a client that discovers agents, consume agent-card.json — and handle the agent.json path variant while you’re at it. Either way, you’ll need openid-configuration and the oauth-* endpoints sorted out, since both A2A and MCP lean on OAuth for authentication.
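Those auth endpoints have their own well-known discovery documents: OpenID Connect discovery at /.well-known/openid-configuration and RFC 8414 authorization server metadata at /.well-known/oauth-authorization-server. A client can probe for them the same way it probes for an Agent Card; a quick sketch:
# Sketch: probe the standard OAuth/OIDC discovery documents an agent client
# may need before authenticating (OIDC Discovery and RFC 8414 metadata).
import requests

AUTH_METADATA_PATHS = [
    "/.well-known/openid-configuration",
    "/.well-known/oauth-authorization-server",
]

def discover_auth(origin: str) -> dict | None:
    for path in AUTH_METADATA_PATHS:
        resp = requests.get(origin + path, timeout=10)
        if resp.ok:
            return resp.json()
    return None

meta = discover_auth("https://travel.example.com")
if meta:
    print("token endpoint:", meta.get("token_endpoint"))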
If you’re a merchant or building commerce agents, UCP is the next priority. Add a security.txt while you’re at it: it’s a simple RFC (RFC 9116), it takes minutes to write, and it supports a more secure agent ecosystem.
For the future, keep an eye on both SEP-2127 for MCP-specific discovery and PR #4 in the Agent-Card/ai-card repo for the Unified AI Card and AI Catalog.
I’d love to hear your perspective on the future of agent discovery and interoperability. Connect with me on LinkedIn, X, or Bluesky to continue the conversation.
Source Credit: https://medium.com/google-cloud/what-are-the-trends-in-agent-discoverability-and-interoperability-91865e098365?source=rss—-e52cf94d98af—4
