đŸŠžđŸ»#17: What is A2A and why is it – still! – underappreciated?

Community Article Published May 7, 2025

🔳 everything you need to know about Google’s Agent2Agent protocol (and if Google builds the world’s first agent directory, A2A will be the language it speaks)

One of the biggest challenges in enterprise AI adoption is getting agents built on different frameworks and vendors to work together.

Google, on why agent interoperability matters

Remember the classic example of what we wish AI agents could do smoothly? “Book me a trip to New York next weekend. Prefer a direct flight, leave Friday afternoon, back Sunday evening. And find a hotel close to a good jazz bar.” The problem with that (besides becoming a clichĂ©) is that AI agents still struggle to understand your full intent, plan across multiple steps, and act reliably across tools – all without constant hand-holding. Each step (parsing the task, finding options, making tradeoffs, booking) works okay in isolation, but stitching it all together smoothly and safely? That’s still brittle and error-prone.

Most agents today operate in silos, each locked into its own ecosystem or vendor. As a result, we have a fragmented landscape where agents can’t directly talk to each other, limiting their usefulness in complex, cross-system workflows.

In April 2025, Google unveiled Agent2Agent (A2A) as an open protocol to break these silos. Backed by an all-star roster of over 50 partners (from Atlassian and Salesforce to LangChain), A2A aims to be the “common language” that lets independent AI agents collaborate seamlessly across applications.

Yet even with the loud launch and 50 big-name partners, a few weeks later A2A remains underappreciated. It hasn’t ignited the kind of frenzy one might expect given its pedigree.

The level of popularity on Reddit and the problem of naming 😳

Currently, the trend suggests a slowdown in growth – why such a lukewarm reception for what could be critical infrastructure?

Image Credit: GitHub Star History

In this article, we’ll dive deep into A2A – what it is, why it exists, how it works, what people think about it – and explore why its adoption is lagging (and why that might soon change). We’ll walk through the technical foundation of A2A, compare it to protocols like Anthropic’s MCP, and explain the real-world challenges that come with building multi-agent systems. Along the way, we’ll also look at why Google’s push for agent interoperability could have much bigger implications – possibly even laying the groundwork for a searchable, internet-scale directory of AI agents. As always, it’s a great starting guide, but also useful for those who have already experimented with A2A and want to learn more. Dive in!


📹 Click follow! If you want to receive our articles straight to your inbox, please subscribe here


Follow us on đŸŽ„ YouTube, Twitter and Hugging Face đŸ€—


In today’s episode, we will cover:

  • Why A2A Isn’t Making Waves (Yet)
  • So, What Is A2A and How Does It Work?
  • How Do I Actually Get Started with A2A?
  • Before A2A: The Fragmented World of Isolated Agents
  • Is A2A a Silver Bullet for AI Collaboration? + Challenges
  • Will MCP and A2A Become Competitors?
  • A2A in Agentic Orchestration and Its Place in the AI Stack
  • New Possibilities Unlocked by A2A
  • Concluding Thoughts
  • Resources to Dive Deeper

Why A2A Isn’t Making Waves (Yet)

Google’s announcement of A2A checked all the right boxes: a compelling vision of cross-agent collaboration, heavyweight partners, open-source code, and even a complementary relationship with Anthropic’s Model Context Protocol (MCP). In theory, the timing is perfect. The AI world is abuzz with “agent” frameworks – but most first-generation “AI agent” stacks have been solo players: single large language models equipped with a toolbox of plugins or APIs. Recently, we saw the tremendous success of MCP, which standardizes how an AI agent accesses tools and context – acting as a kind of “USB-C port for AI”. A2A picks up where that leaves off: standardizing how multiple autonomous agents communicate, so they can exchange tasks and results without custom integration glue.

So why hasn’t A2A taken off overnight? Part of the issue is hype dynamics. When Anthropic announced MCP in late 2024, it initially got a tepid response; only months later did it trend as a game-changer. A2A may be experiencing a similar delay in recognition. Its value is a bit abstract at first glance – enterprise agent interoperability isn’t as immediately flashy as, say, a new state-of-the-art model or a chatbot that writes code. Many developers haven’t yet felt the pain of multi-agent collaboration because they’re still experimenting with single-agent applications. In smaller-scale projects, one might simply orchestrate multiple API calls within a single script or use a framework like LangChain internally, without needing a formal protocol. The real urgency of A2A’s solution becomes evident in larger, complex environments – exactly those in big companies – but that story is still filtering out to the broader community.

Another factor is the “yet another standard” fatigue. Over the past year, numerous approaches for extending LLMs have popped up: OpenAI’s function calling, various plugin systems, custom RPC schemes, not to mention vendor-specific agent APIs. Developers might be asking: Do we really need another protocol? Right now, A2A is still so new that there are few public success stories – no killer demo that has gone viral to showcase “agents talking to agents” in a jaw-dropping way. Without that spark, A2A remains under the radar, quietly intriguing to those who read the spec but not yet a buzzword in everyday AI developer chats. (Remember, all links for further learning are included at the end of the article.)


So, What Is A2A and How Does It Work?

At its core, Agent2Agent (A2A) is a communication protocol that lets independent AI agents speak to each other in a structured, secure way. Concretely, it defines a common set of HTTP-based JSON messages for one agent to request another to perform a task, and to get the result back – potentially with a back-and-forth dialogue if needed. It’s an open standard (open-sourced under Apache license) that any agent framework or vendor can implement, allowing interoperability much like how web browsers and servers share the HTTP/HTML standard.


Let’s break down the key components of A2A:

At the heart of A2A (Agent-to-Agent communication) is the Agent Card – a public manifest, typically hosted at /.well-known/agent.json, that describes an agent’s capabilities, endpoint URL, and authentication requirements. Think of it like an OpenAPI-style profile: a client agent can fetch another agent’s card and immediately see, for example, "this agent can handle CRM tickets and generate reports," before deciding to send it a task.
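To make that concrete, here’s a sketch of what an Agent Card might contain. The field names approximate the spec’s general shape (name, endpoint URL, auth, advertised skills) but are illustrative – check the official specification for the exact schema:

```python
import json

# Illustrative Agent Card -- field names approximate the A2A spec's shape;
# consult the current specification for the exact schema.
agent_card = {
    "name": "CRM Helper",
    "description": "Handles CRM tickets and generates reports",
    "url": "https://agents.example.com/crm",  # endpoint clients will call
    "version": "1.0.0",
    "authentication": {"schemes": ["bearer"]},
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {"id": "handle_ticket", "name": "Handle CRM ticket"},
        {"id": "generate_report", "name": "Generate report"},
    ],
}

# An A2A server would serve this JSON at /.well-known/agent.json
print(json.dumps(agent_card, indent=2))
```

A client agent fetches this document once, inspects the skills, and only then decides whether to send a task.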

A2A defines two flexible roles: Client (requesting agent) and Server (performing agent). Any A2A-compliant agent can fluidly switch roles, enabling flexible topologies like peer-to-peer meshes or hub-and-spoke models.

The core unit of collaboration is a Task. A client creates a Task when asking a remote agent to perform work, using a tasks/send request. Each task has a unique ID and moves through a lifecycle: submitted, working, input-required, completed, or failed. Both agents track task progress together, supporting richer, multi-step interactions like clarification questions or partial deliveries.
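The lifecycle above can be sketched as a small state machine. The state names come from the spec; the transition table is our own plausible reading of it, not something the spec mandates:

```python
from enum import Enum


class TaskState(str, Enum):
    # Lifecycle states described by the A2A spec
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"


# Illustrative transition table -- a plausible reading of the spec,
# not its normative definition.
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.FAILED},
    TaskState.WORKING: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED, TaskState.FAILED},
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.FAILED},
    TaskState.COMPLETED: set(),   # terminal
    TaskState.FAILED: set(),      # terminal
}


def can_transition(current: TaskState, new: TaskState) -> bool:
    """Check whether a task may move from `current` to `new`."""
    return new in TRANSITIONS[current]
```

The `input-required` state is what makes multi-turn clarification possible: the server parks the task there until the client replies.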

Messages and their Parts structure the dialogue. A Message could be a user request, a status update, or a follow-up, and each Message is made of Parts – chunks of content that might be plain text, structured JSON data, or binary files like images or PDFs. This design supports multimodal communication: for instance, an agent might send a form (structured data) instead of just text, making exchanges far more dynamic.

When a task is complete, the output is packaged as an Artifact, which is also built from Parts. Artifacts are durable, structured results – a PDF report, a JSON dataset, an image – that other agents can immediately reuse in new tasks without extra parsing.
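As a rough sketch, a Message and an Artifact might look like the plain dicts below. The field names approximate the spec and the `text_of` helper is our own illustration, not part of any SDK:

```python
# Illustrative shapes for Messages, Parts, and Artifacts; field names
# approximate the A2A spec and may differ from the current schema.
user_message = {
    "role": "user",
    "parts": [
        {"type": "text", "text": "Generate a bar chart of sales by region for Q1."},
        {"type": "data", "data": {"regions": ["EMEA", "APAC", "NA"]}},  # structured part
    ],
}

artifact = {
    "name": "q1_sales_chart",
    "parts": [
        {"type": "file", "file": {"mimeType": "image/png", "uri": "https://example.com/chart.png"}},
    ],
}


def text_of(message: dict) -> str:
    """Concatenate the plain-text parts of a message (hypothetical helper)."""
    return " ".join(p["text"] for p in message["parts"] if p["type"] == "text")
```

Because both messages and artifacts are built from the same Part types, a receiving agent can handle text, structured data, and files with one code path.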

A2A also supports Streaming and Notifications. Long-running tasks can stream live updates via Server-Sent Events (SSE), letting the client agent subscribe to task progress. Agents can even push updates directly to a client’s webhook, allowing asynchronous architectures to integrate cleanly.
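Because A2A streams updates over standard SSE, a client needs only a few lines to consume them. Here’s a minimal parsing sketch, assuming one JSON payload per `data:` event, as a task-status stream would typically send:

```python
import json


def parse_sse(stream: str):
    """Parse a Server-Sent Events stream into JSON payloads.

    Each SSE event is a block of `data: ...` lines separated by a blank
    line; we assume one JSON payload per event, as an A2A streaming
    endpoint would typically send task status updates.
    """
    events = []
    for block in stream.split("\n\n"):
        data_lines = [ln[len("data: "):] for ln in block.splitlines()
                      if ln.startswith("data: ")]
        if data_lines:
            events.append(json.loads("\n".join(data_lines)))
    return events


# Example: two status updates a server agent might stream for one task
raw = (
    'data: {"taskId": "t-1", "state": "working"}\n\n'
    'data: {"taskId": "t-1", "state": "completed"}\n\n'
)
updates = parse_sse(raw)
```

In production you would read the same blocks from an open HTTP response rather than a string, but the framing is identical.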

Under the hood, A2A is built entirely on familiar web standards: simple HTTP requests with JSON-RPC 2.0 payloads, SSE for streaming, and typical REST API authentication methods like OAuth 2.0, mutual TLS, or signed JWTs. There’s no exotic transport layer or custom binary encoding – just pragmatic choices that make enterprise adoption much easier.
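Since the payloads are plain JSON-RPC 2.0, building a `tasks/send` request is straightforward. The envelope below (`jsonrpc`/`method`/`params`/`id`) follows JSON-RPC 2.0 exactly; the `params` shape is an illustrative approximation of the A2A spec:

```python
import json
import uuid


def make_tasks_send(task_id: str, text: str) -> dict:
    """Build a JSON-RPC 2.0 request for the A2A `tasks/send` method.

    The envelope follows JSON-RPC 2.0; the params shape approximates
    the A2A spec and may differ from the current schema.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),  # request id, echoed back in the response
        "method": "tasks/send",
        "params": {
            "id": task_id,  # the task's own id
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }


request = make_tasks_send("task-123", "Generate a bar chart of sales by region for Q1.")
body = json.dumps(request)  # this string goes in the HTTP POST body
```

The server replies with a matching JSON-RPC response carrying the task’s current state, so ordinary HTTP tooling (curl, any JSON-RPC client) is enough to talk to an A2A agent.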

A typical A2A interaction might look like this: Agent Alpha (client) needs a sales chart. It fetches Agent Beta’s (server) Agent Card and sees a “create_chart” capability. Alpha sends a task: “Generate a bar chart of sales by region for Q1.” Beta acknowledges the task and streams progress updates as it works. If Beta needs more details – say, clarification on which regions – it sends an input-required message, and Alpha replies. When the chart is ready, Beta returns the final artifact (an image file), which Alpha can use or hand off to another agent.

Because Beta’s skills were openly discoverable and invoked through a uniform JSON protocol, this whole interaction avoided the usual brittle API handoffs and custom glue code. That’s the promise of A2A: making agent collaboration as seamless as calling a REST endpoint.

How do I actually get started with A2A? (the most basic way)

What you'll need:

  • A code editor such as Visual Studio Code (VS Code)
  • A command prompt such as Terminal (Linux), iTerm (Mac) or just the Terminal in VS Code

What to do:

  1. Clone the repo git clone https://github.com/google/A2A && cd A2A – it contains the spec, a Python SDK, and runnable demo agents.

  2. Spin up two sample agents:

```shell
cd samples/python/agents/hello_world
python server.py --port 5001      # “remote” agent
cd ../cli
python main.py --task "say_hi"    # “client” agent
```

You’ll see the task lifecycle messages flow over HTTP exactly as the spec describes.

  3. Give your agent an identity. Drop a file called /.well-known/agent.json (the Agent Card) alongside your service. It’s a single-page rĂ©sumĂ©: name, description, auth method, and – most important – the capabilities array that advertises which tasks you can handle.
  4. Wrap your own logic. If you already have an agent in LangChain, CrewAI, or plain Python, install the tiny adapter and expose it:

```shell
pip install python-a2a
```

```python
from python_a2a.langchain import to_a2a_server

a2a_server = to_a2a_server(my_langchain_agent)
a2a_server.run(port=5000)
```

Everything after that – security, registries, observability – is just normal micro-service hygiene, only now your “micro-service” speaks in tasks and artifacts instead of REST verbs.

Before A2A: The Fragmented World of Isolated Agents

Before A2A, most AI workflows revolved around a single “uber-agent” orchestrating a stack of tools – hence MCP’s rise – so genuine agent-to-agent hand-offs were rare. Attempts at multi-agent collaboration were improvised: brittle natural-language chats or vendor-locked frameworks that couldn’t mix Microsoft, Google, and open-source agents without heavy glue code. With no common way to discover peers, pass tasks, or reference artifacts, every handshake was bespoke – and fragmentation quickly became a scaling nightmare. (Academic protocols like KQML and FIPA-ACL tried to solve this in the ’90s, but they never crossed over into today’s LLM world.)


Is A2A a Silver Bullet for AI Collaboration? + Challenges

With all this promise, it’s important to ask: Does A2A solve everything? Of course not. Much like MCP or any new technology, A2A comes with its own set of challenges and is not a cure-all for multi-agent systems. It’s better to think of A2A as a powerful enabler – an integration layer that can make previously impossible workflows feasible – but not a guarantee of success on its own.

One important limitation: A2A doesn’t make agents smarter – it just makes it easier for them to talk. If you connect two mediocre agents, you don’t get brilliance; you risk endless task-passing with no progress. Effective collaboration still needs careful design: deciding who tackles what, ensuring shared goals, handling failures. In short, A2A doesn’t eliminate the need for orchestration intelligence; it simply makes the communications in that orchestration standardized.

Adopting A2A also introduces operational overhead. Each agent becomes a service (with an HTTP endpoint or embedded server), which means managing a mesh of agents: HR on Workday, sales on Salesforce, custom Python analytics – all needing discovery, authentication, monitoring, and resilience. It’s microservices all over again. For small workflows, a simple script might be easier. Like MCP, which only pays off when you have many tools and contexts, A2A only pays off when you’re stitching together many agents with diverse capabilities.

A2A’s spec is brand new (technically still a draft), and will likely evolve. Implementers should expect breaking changes, edge-case bugs, and a moving target. As with any new protocol, early adopters play the role of testers too. If you’re not ready to stay active in the community, that may be a dealbreaker.

Compatibility is another hurdle. A2A gets more valuable the more agents and vendors support it. Without critical mass, you might be better off using native APIs. Google has rallied an impressive group of enterprise partners, but key players like OpenAI and Microsoft haven’t publicly endorsed A2A (Microsoft’s Semantic Kernel blog shows an experimental A2A adapter, but that’s an SDK team experiment – not a formal endorsement). Anthropic’s MCP is complementary, but not the same. If we end up in a protocol war – A2A vs. MCP vs. something else – developers could get stuck building adapters or falling back to brittle integrations.

Security is another frontier. A2A includes token auth and TLS, but real-world policies, credentials, and audits are left to users. Enterprises will likely need “agent gateways” – the equivalent of API gateways – to manage trust between agents.

None of this is a dealbreaker. It’s just what new infrastructure looks like. Microservices took years to mature. A2A will too. It's not a silver bullet – it’s protocol glue. But with the right expectations and pilot projects, it can solve communication problems in multi-agent systems. The rest still depends on good design, thoughtful implementation, and a community willing to push it forward.

Will MCP and A2A Become Competitors?

The short answer is no. While some argue that competition might emerge if A2A starts absorbing functionalities traditionally covered by MCP – particularly if companies begin modeling their data and services primarily as independent agents rather than mere tools or resources – I find this scenario highly unlikely.

Tools and resources will always remain distinct and essential building blocks in agentic systems, and the space for their integration is expansive enough to comfortably accommodate both protocols. MCP excels at standardizing interactions between LLMs and external data sources or tools, while A2A addresses secure, stateful inter-agent communication. Given their complementary strengths and the sheer scale of the agentic ecosystem, both MCP and A2A will coexist, each finding its clearly defined and valuable role.

A2A in Agentic Orchestration and Its Place in the AI Stack

Where does A2A fit into the emerging AI infrastructure stack? To answer that, picture the layers involved in turning raw AI models into useful autonomous agents. In previous discussions, we’ve broken down the components of agentic systems – things like memory, reasoning, and tool use. A2A doesn’t try to solve all of those; it slots in as a communication and coordination layer.

First, consider a single agent. It typically consists of a core model (like an LLM), behavior logic (via prompting or planning), and mechanisms for interfacing with the outside world (tools, APIs). Frameworks like LangChain, Semantic Kernel, and Google’s Agent Development Kit (ADK) help manage these parts. MCP, as we’ve covered before, standardizes how agents plug into external tools.

A2A sits one level higher: agent-to-agent. If MCP and tool APIs enable an agent to act on the world, A2A enables agents to act on each other. They’re complementary, not competitors. In fact, they often combine. An agent’s A2A Card could advertise capabilities internally powered by MCP – for example, an "Invoice Processing Agent" could offer to extract invoice fields (via A2A) while using OCR tools (via MCP) under the hood. A2A orchestrates multi-agent workflows, while leaving internal tool management to each agent.

Stacking it up:

  • LLM and Reasoning – Core intelligence and decision-making logic.
  • Tool/Context Interface (MCP, Plugins) – Lets agents use external tools and data.
  • Agent Framework / Runtime – Manages agent loops, memory, and task splitting.
  • Inter-Agent Protocol (A2A) – Allows agents to coordinate and delegate across systems.
  • Orchestrator / Manager – (Optional) Supervisory logic that decides when to invoke other agents.

Importantly, A2A doesn’t replace LangChain, Semantic Kernel, or other frameworks – it lets them interoperate. The A2A GitHub repo already includes adapters for LangChain, LlamaIndex, Marvin, Semantic Kernel, and more. A LangChain agent and a Semantic Kernel agent can now collaborate without custom glue.

It’s a familiar pattern: in the early web, applications couldn’t easily talk to each other until conventions like REST standardized how they exchanged data. Now A2A is trying to do the same for AI agents.

Finally, while OpenAI’s function calling allows intra-agent tool use, A2A enables inter-agent cooperation. They serve different scopes and will likely coexist in sophisticated systems.

If it succeeds, A2A will become the lingua franca of multi-agent workflows – a quiet but critical enabler of the next phase of AI infrastructure.


New Possibilities Unlocked by A2A

Because A2A hasn’t hit mainstream awareness yet, many are underestimating the kinds of workflows and collaborations it makes possible. Let’s look at a few.

  • Specialist Agents Working as a Team: Instead of building one giant agent to handle everything, A2A allows teams of specialized agents to collaborate dynamically. In customer support, for instance, a tech troubleshooting agent, a finance agent, and a promotions agent could seamlessly hand off tasks in one session, mirroring how human teams operate. The user interacts with just one front, while agents collaborate in the background.
  • Cross-Enterprise Workflows: Enterprises run on many platforms – Salesforce for CRM, ServiceNow for IT, Workday for HR. With A2A, an HR agent could request IT to provision a laptop without manual IT tickets or brittle API glue code. Each agent stays excellent in its own domain but can plug into larger, cross-company workflows through the protocol.
  • Dynamic Agent Swapping and Upgrading: A2A standardizes capabilities, opening the door to modularity. You could swap out an open-source summarization agent for a commercial one without changing how it’s called. Over time, this could lead to a marketplace of interoperable agents – hire a legal analysis agent, a translator, a market researcher – all speaking A2A.
  • Human-in-the-Loop Oversight: Not all agents have to be AI. Humans could participate through A2A clients too – approving or modifying AI-suggested tasks, or monitoring sensitive interactions between agents. This formalized handoff and oversight becomes easier with standardized artifacts and task states.
  • Federated Agents Across Organizations: Looking further out, A2A could enable agents from different companies to collaborate securely, exchanging tasks across organizational boundaries. Supply chain agents negotiating inventory, cross-company R&D teams – all possible once trust layers and agreements are in place.

Many of these setups were either infeasible or extremely costly before. Ironically, while the AI world chases bigger models and fancier prompts, it might be humble plumbing like A2A that unlocks qualitatively new capabilities. By making agentic collaboration modular, secure, and plug-and-play, A2A lowers the barrier to innovation – and we’re just starting to see what’s possible.

Concluding Thoughts – Could Google spin A2A into a public, Google-search-style index of agents?

Technically, yes. The spec already requires every compliant agent to publish a machine-readable Agent Card at /.well-known/agent.json. That’s the perfect hook for a crawler: just follow URLs, fetch the card, and drop the metadata into Bigtable – exactly the pattern Google used when it turned robots.txt + sitemap.xml into a web index. Today that discovery step is peer-to-peer, but nothing prevents Googlebot-for-Agents from doing the same job at internet scale.

Early signals inside Google Cloud show the appetite. Agentspace’s Agent Gallery is already a gated, searchable catalog for enterprise customers; it lists Google-built, partner and in-house agents, and it taps Cloud Marketplace for distribution. That’s a miniature App Store for agents – just minus the public crawler.


A2A lays the pipes; whether Google turns those pipes into the world’s switchboard is still an open question. Still, A2A is early. Its under-the-radar status today recalls the first murmurs around containers, Kubernetes, and REST APIs – quiet starts that erupted once the ecosystem tipped. If Mountain View does roll out a public agent index, it could become the DNS of autonomous software – potent, profitable, and politically radioactive. Until that future crystallizes, pilot A2A, track rival specs, and stay nimble. Infrastructure wins hinge less on brilliance than on trust, incentives, and a thousand quiet integration stories. Watch those stories.

Author: Ksenia Se

Resources to Dive Deeper:

  • Announcing the Agent2Agent Protocol (Google Blog)
  • Official A2A Specification (GitHub)
  • A2A Protocol Documentation (docs)
  • A2A Python Quickstart Tutorial
  • Python A2A (GitHub)
  • Awesome A2A (GitHub)
  • LlamaIndex File Chat Workflow with A2A Protocol (GitHub)
  • A2A Directory with community implementations (GitHub)
  • Building the industry’s best agentic AI ecosystem with partners (Google Blog)
  • Aravind Putrevu’s Agent2Agent Protocol Explained (blog)
  • Building A Secure Agentic AI Application Leveraging Google’s A2A Protocol (research paper)
  • A Survey of AI Agent Protocols (research paper)

Sources from Turing Post



