The web got its composition layer. Agents are next.


Ask an AI agent to book a restaurant, check your calendar, pull a competitor’s pricing, or file an expense. If the developer who built it didn’t wire up that specific service in advance, the agent can’t do it. Not because it lacks the intelligence. Because it was never introduced.

That’s the constraint on agents right now. The limit isn’t reasoning, it’s reach.

The protocols that govern trust, consent, and payment on the internet were all built assuming a human would complete the handshake. OAuth requires a person to click “authorize” in a browser. Terms of service require a person to accept. Payment flows require a person to enter a card. Even finding a new service assumes someone is browsing, following links, typing into search boxes. The human wasn’t a convenience—they were the mechanism. They closed every loop these protocols left open.
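The human dependency is concrete in OAuth's authorization-code flow. Software can construct the consent URL, but the flow stalls there until a person opens it in a browser and clicks "authorize." A minimal sketch (endpoint and identifiers are placeholders):

```python
# OAuth 2.0 authorization-code flow (RFC 6749): the client builds this
# URL, but only a human in a browser can complete the step it points to.
# The endpoint, client_id, and scope below are placeholder values.
from urllib.parse import urlencode

def authorization_url(auth_endpoint: str, client_id: str,
                      redirect_uri: str, scope: str) -> str:
    """Build the URL a person must visit to grant consent."""
    params = {
        "response_type": "code",   # request an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return f"{auth_endpoint}?{urlencode(params)}"

url = authorization_url("https://auth.example.com/authorize",
                        "my-client-id", "https://app.example.com/cb",
                        "calendar.read")
print(url)  # an agent can print this; it can't click it
```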

Agents break that assumption. There’s no human in the loop to click, accept, browse, or pay. So every agent-to-service connection gets solved the old way: a developer builds it by hand before the agent runs. The developer picks the services, sets up the credentials, and ships. The agent operates inside whatever that developer configured. It can’t discover something new and connect on its own. It can only reach what it was previously pointed at.

MCP (the Model Context Protocol), Anthropic's open standard for connecting tools to agents, made this less painful. Instead of each team writing custom integrations, there's now a shared format. Thousands of connectors appeared in the months after it launched, and the map of what agents can reach grew from near-zero to something useful. But a developer still draws that map. An agent consults it. It doesn't extend it.
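For a sense of what that shared format looks like: MCP clients and servers exchange JSON-RPC 2.0 messages, including a `tools/list` request that asks a server what it offers. The response below is invented for illustration (the tool name and schema are made up); real servers advertise their own tools.

```python
# Rough sketch of the wire format MCP standardizes: JSON-RPC 2.0
# messages such as "tools/list". The server response is hypothetical.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",   # ask the server what tools it offers
}

# What a (hypothetical) restaurant-booking server might answer with:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "book_table",
            "description": "Reserve a table at a restaurant",
            "inputSchema": {
                "type": "object",
                "properties": {"party_size": {"type": "integer"}},
            },
        }]
    },
}

# The agent learns the tool exists -- but only because a developer
# pointed it at this server in the first place.
tool_names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(tool_names))
```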

There’s an obvious rebuttal: developer configuration is the right gate for an agent acquiring new capabilities. And for tasks the developer anticipated, that’s true. An agent that can autonomously find and connect to services is also an agent that can spend your money on services nobody vetted. That’s a real concern. But it’s an argument for better constraints on autonomous action, not for requiring a human to wire every connection. The position breaks when the agent’s value is discovering capabilities the developer didn’t know existed.

Before the web had a standard protocol, every network connection to a new host required manual configuration. HTTP changed that. Any browser could reach any server without anyone preconfiguring that specific connection. The browser didn’t need to know a site existed before the user visited it. The protocol handled finding the server and negotiating the exchange. That’s why the web scaled to billions of pages. Browsers didn’t get smarter. The protocol let them connect to anything.

Agents don’t have that yet. The components exist (identity standards, permission models, payment specs) but they don’t compose into a handshake that two software systems can run on first contact, without anyone arranging the meeting.

People are working on the pieces, and the list is getting specific. Google put Agent2Agent under the Linux Foundation with 150-plus organizations behind it for routing and hand-offs between agents. Google's AP2 protocol, backed by 60-plus partners including Mastercard and PayPal, uses cryptographically signed mandates to prove an agent is allowed to spend on your behalf. The IETF has active drafts for agent discovery (AID, using DNS records) and identity verification (SD Agent, using selective disclosure). The W3C published a finalized standard for machine-readable credentials in March 2026.
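To make the discovery piece concrete: publishing agent endpoints in DNS, in the spirit of the AID draft, might look roughly like this. The record name (`_agent.example.com`) and the key=value field format here are assumptions for illustration, not the draft's exact syntax.

```python
# Sketch of DNS-based agent discovery in the spirit of the AID draft.
# The record location and field format are invented for illustration.

def parse_agent_record(txt: str) -> dict:
    """Parse a hypothetical agent-discovery TXT record into fields."""
    return dict(field.split("=", 1) for field in txt.split(";") if field)

# A TXT record a domain might publish at _agent.example.com:
record = "v=aid1;uri=https://api.example.com/agent;proto=mcp"
info = parse_agent_record(record)
print(info["uri"])  # where to reach the domain's agent endpoint
```

One DNS lookup and a service becomes reachable, the way one HTTP request makes a new website reachable.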

But routing doesn’t talk to payments. Payments doesn’t talk to identity, and neither talks to discovery. A developer who wants to assemble a full handshake still wires the pieces together by hand. Same work, better components.

The web solved this problem thirty years ago. Agents still haven’t.

The protocol that changes this doesn’t exist yet. UDDI tried for web services in the early 2000s—a universal registry where machines could discover and connect to services without pre-configuration. IBM, Microsoft, and SAP built public nodes. They shut them down by 2006. The economic pressure wasn’t there when a human could just browse a directory.

That changes when the party seeking the service isn’t a person. The protocol that works would let two software systems meet cold—find each other, confirm who they are, agree on what’s permitted, and settle payment—without a developer in the loop. The ability to navigate without a map.
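The shape of that handshake can be sketched, though every function below is purely hypothetical: none of these steps correspond to a real protocol today. They name the gaps that the existing pieces don't yet compose into.

```python
# Hypothetical sketch of the four-step cold handshake: discover,
# verify identity, agree on permissions, settle payment. All stubbed.
from dataclasses import dataclass

@dataclass
class Service:
    endpoint: str
    identity_proof: str     # e.g. a verifiable credential
    price_per_call: float

def discover(query: str) -> Service:
    """Step 1: find a service without a preconfigured map (stubbed)."""
    return Service("https://api.example.com", "signed-credential", 0.02)

def verify(svc: Service) -> bool:
    """Step 2: confirm who the service is (stubbed)."""
    return svc.identity_proof == "signed-credential"

def within_mandate(svc: Service, spend_cap: float) -> bool:
    """Step 3: check the action against what the owner permitted."""
    return svc.price_per_call <= spend_cap

def settle(svc: Service) -> str:
    """Step 4: pay, e.g. via a signed mandate (stubbed)."""
    return f"paid {svc.price_per_call} to {svc.endpoint}"

svc = discover("restaurant booking")
if verify(svc) and within_mandate(svc, spend_cap=0.10):
    print(settle(svc))
```

Each step exists somewhere today as a separate standard. What's missing is the sequence running end to end between two systems that have never met.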

Once that protocol exists, the developer’s job changes. Instead of wiring connections in advance, they set the boundaries: how much the agent can spend, what categories of service it can engage. The agent operates freely inside those constraints—in the persistent environments it already inhabits. New services become reachable the moment they go live, the way new websites became reachable the moment HTTP gave browsers a way to find them.
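What "setting the boundaries" might look like, as a sketch (the policy fields are invented): the developer declares limits once, and every autonomous connection is checked against them rather than wired in advance.

```python
# Hypothetical sketch of the shifted developer role: declare boundaries
# once instead of wiring each connection. Field names are invented.
policy = {
    "monthly_spend_cap": 25.00,
    "allowed_categories": {"calendar", "dining", "travel"},
}

def permitted(category: str, cost: float, spent_so_far: float) -> bool:
    """Would this action stay inside the developer-set boundaries?"""
    return (category in policy["allowed_categories"]
            and spent_so_far + cost <= policy["monthly_spend_cap"])

print(permitted("dining", 1.50, spent_so_far=20.00))   # inside bounds
print(permitted("crypto", 1.50, spent_so_far=0.00))    # blocked category
```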

Until then, an agent’s reach is exactly as wide as whoever built it decided it should be.

Part of the agent-era infrastructure series.