The catalog isn't the market
A procurement agent runs a sourcing task. It needs commodity pricing data. Dozens of APIs exist for this. It queries one, the one hardcoded into its config by the developer who built it. The others don’t exist as far as it’s concerned. It can connect to anything. It just doesn’t know anything else is there.
Protocols determine how agents talk to services they’ve found. Discovery determines whether they find them at all. MCP gave agents a standard way to connect to tools: one integration instead of a week of custom engineering per service. Twenty thousand implementations in fourteen months. The protocol layer is converging. But an agent connecting to a new tool still requires a developer who knows both systems exist and hardcodes the connection before the agent runs. Scale is capped by developer hours, not by demand.
The registries arrived fast. Smithery indexes 7,000+ MCP servers. PulseMCP tracks 11,840+ servers, updated daily. mcp.so lists over 19,000 submissions. 104,000+ agents are registered across 17+ directories. Nobody expected this volume this quickly.
All of it is built for a developer to browse. An agent can’t query any of it at runtime. Every connection in every deployed agent was wired by a human who found a server somewhere, evaluated it, and added it to a config file. That’s configuration. Configuration isn’t discovery.
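The distinction is concrete in code. Everything below is illustrative: the config keys, URLs, and the `discover()` function are hypothetical, since no runtime discovery API exists today.

```python
# What deployed agents have today: a connection wired by a developer
# before the agent ever runs. Server URL and auth scheme are fixed.
STATIC_CONFIG = {
    "commodity_pricing": {
        "server": "https://pricing.example.com/mcp",  # hypothetical URL
        "auth": "api_key",
    }
}

def resolve_static(capability):
    """Configuration: the agent can only reach what was wired in."""
    entry = STATIC_CONFIG.get(capability)
    return entry["server"] if entry else None

def discover(capability):
    """Discovery: resolve a capability never seen before, at runtime,
    with no developer in the loop. Hypothetical -- nothing like this
    exists as a shared layer yet."""
    raise NotImplementedError("no shared discovery layer exists")

print(resolve_static("commodity_pricing"))  # the one wired-in server
print(resolve_static("freight_rates"))      # None: invisible to the agent
```

The second call is the procurement agent from the opening: the capability exists in the world, but not in the config, so for this agent it does not exist at all.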
The catalog and the market
The Yellow Pages was a catalog. Every business in the phone book, organized by category, browsable by a person who already knew what category to look under. It worked for decades. Google replaced it with something structurally different: describe what you need, get matched to something that fits. The Yellow Pages didn’t die because Google had a better directory. It died because Google turned browsing into matching.
Agent registries are the Yellow Pages. Comprehensive, organized, browsable by a developer with time to look. What agents need at runtime is the other thing: capability matching. “Something that can check freight rates, accepts my payment model, and works with my auth.” That’s semantic, not syntactic. Dynamic, not pre-configured. DNS maps a name to an address. What agents need maps a capability requirement to a provider.
The catalog tells you what exists. The market tells you what fits. Nobody has built the market.
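The shape of a market query, as opposed to a catalog lookup, can be sketched as a toy matcher. The provider records, field names, and exact-match filtering are all invented for illustration; a real market would match semantically and weigh trust signals, not filter on hand-written fields.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capabilities: set
    payment_models: set
    auth_schemes: set

# Hypothetical provider records -- in a real market these would come
# from a live discovery layer, not a hand-maintained list.
PROVIDERS = [
    Provider("freightly", {"freight_rates"}, {"per_call"}, {"oauth2"}),
    Provider("cargoquote", {"freight_rates", "customs"}, {"subscription"}, {"api_key"}),
    Provider("pricefeed", {"commodity_pricing"}, {"per_call"}, {"oauth2"}),
]

def match(capability, payment, auth):
    """Return providers that fit the requirement, not just ones listed
    under a category. This is the market operation a catalog can't do."""
    return [
        p.name for p in PROVIDERS
        if capability in p.capabilities
        and payment in p.payment_models
        and auth in p.auth_schemes
    ]

# "Something that can check freight rates, accepts my payment model,
# and works with my auth."
print(match("freight_rates", "per_call", "oauth2"))  # ['freightly']
```

The catalog answer to this query is a category page with every freight API on it. The market answer is the one provider that fits.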
MCP’s 2026 roadmap includes Server Cards, a standard for exposing server metadata at .well-known/mcp.json so registries can catalog capabilities without manual submission. Crawlability and indexing are solved problems. Server Cards close the remaining gap in the catalog layer. They make the Yellow Pages more complete. They don’t turn it into Google.
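A sketch of what consuming a Server Card might look like. The field names below are guesses, since the Server Card schema is still on the roadmap, and the card is an inline sample rather than a live fetch from `.well-known/mcp.json`.

```python
import json

# A hypothetical Server Card, as a registry crawler might retrieve it
# from a server's /.well-known/mcp.json endpoint. All field names are
# assumed -- the actual schema is not yet finalized.
SAMPLE_CARD = json.loads("""
{
  "name": "pricing-server",
  "description": "Commodity pricing data",
  "capabilities": ["tools"],
  "tools": [
    {"name": "get_spot_price", "description": "Spot price by symbol"}
  ]
}
""")

def index_card(card):
    """What a registry does with a card: extract metadata and catalog it.
    This automates cataloging -- it still doesn't match capability to
    need, which is why cards complete the catalog, not the market."""
    return {
        "name": card["name"],
        "tool_names": [t["name"] for t in card.get("tools", [])],
    }

print(index_card(SAMPLE_CARD))
```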
Why nobody’s built it
The fragmentation in this layer is structural, not accidental.
Cisco’s AGNTCY project—donated to the Linux Foundation in July 2025, backed by Google Cloud, Oracle, and Red Hat—is building agent discovery on an open-source framework with cryptographic identity and a new messaging protocol. GoDaddy launched an Agent Name Service registry in October 2025, based on an IETF draft, with a public API. AWS shipped Agent Registry as part of AgentCore on April 9, 2026, scoped explicitly to an organization’s own agents and MCP servers. It can’t find anything external. At the IETF, eleven competing Internet-Drafts on agent discovery sat unresolved as of Q1 2026. Zero interoperability between approaches.
Each party is building discovery for their own environment. AWS solves it for AWS customers. AGNTCY lays an open-source foundation aligned with its members’ interests. The IETF is writing eleven architectures. The incentive is to own the discovery layer for your users, not to build a shared one. This is the same dynamic that plays out across every infrastructure layer in the agent ecosystem: payments, identity, compute. The shared layer is always the last to arrive, because nobody with market power benefits from building it.
The catch
An agent that can discover services autonomously is also an agent that can be exploited, overcharged, or misdirected. Runtime discovery without constraints is a risk surface. An earlier piece in this series explored this tension for agent connectivity broadly. Every gain in agent autonomy creates a corresponding need for boundaries on that autonomy. Discovery is the same tradeoff. The question isn’t whether agents should discover services freely. It’s who sets the constraints, and what form those constraints take: guardrails on which services an agent can engage, spending limits, category restrictions, trust signals from the discovery layer itself. The protocol that works will need all of this built in, not bolted on.
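The constraint set named here can be sketched as a policy gate that runs before any engagement. The structure, field names, and thresholds are invented for illustration; real trust signals would come from the discovery layer, not a local number.

```python
from dataclasses import dataclass

@dataclass
class DiscoveryPolicy:
    max_spend_per_call: float  # spending limit
    allowed_categories: set    # category restrictions
    min_trust_score: float     # trust signal threshold from the discovery layer

    def allows(self, provider):
        """Gate every runtime match before the agent engages.
        Built in: the check sits inside the discovery path, not after it."""
        return (
            provider["price_per_call"] <= self.max_spend_per_call
            and provider["category"] in self.allowed_categories
            and provider["trust_score"] >= self.min_trust_score
        )

policy = DiscoveryPolicy(
    max_spend_per_call=0.05,
    allowed_categories={"logistics", "pricing"},
    min_trust_score=0.8,
)

# A cheap, in-category provider is still rejected on trust alone.
candidate = {"price_per_call": 0.01, "category": "pricing", "trust_score": 0.4}
print(policy.allows(candidate))  # False
```

The point of the sketch is where the check lives: a policy the agent consults on every match is a constraint built into discovery; a config file a developer edits afterward is a constraint bolted on.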
Who controls distribution
Right now, an agent’s reach is determined before it runs. A developer decided what it could find. Distribution is controlled by whoever did the configuration.
When agents can match a capability need to a provider at runtime—without a human arranging the introduction—the center of gravity shifts. The platform that brokers the match determines what gets used. That’s not an indexing play. It’s a demand-side platform play, the same structural position Google occupied when it sat between intent and destination. Every query that ran through Google was a moment where Google decided what the user found. Every capability match that runs through an agent discovery layer is a moment where that layer decides what the agent reaches.
Whoever builds the market layer for agents doesn’t just fix a gap in the infrastructure. They become the distribution platform for everything agents can do.
Part of the agent-era infrastructure series.