
The Architect’s Dilemma

13 October 2025 at 07:22

The agentic AI landscape is exploding. Every new framework, demo, and announcement promises to let your AI assistant book flights, query databases, and manage calendars. This rapid advancement of capabilities is thrilling for users, but for the architects and engineers building these systems, it poses a fundamental question: When should a new capability be a simple, predictable tool (exposed via the Model Context Protocol, MCP) and when should it be a sophisticated, collaborative agent (exposed via the Agent2Agent Protocol, A2A)?

The common advice is often circular and unhelpful: “Use MCP for tools and A2A for agents.” This is like telling a traveler that cars use motorways and trains use tracks, without offering any guidance on which is better for a specific journey. This lack of a clear mental model leads to architectural guesswork. Teams build complex conversational interfaces for tasks that demand rigid predictability, or they expose rigid APIs to users who desperately need guidance. The outcome is often the same: a system that looks great in demos but falls apart in the real world.

In this article, I argue that the answer isn’t found by analyzing your service’s internal logic or technology stack. It’s found by looking outward and asking a single, fundamental question: Who is calling your product/service? By reframing the problem this way—as a user experience challenge first and a technical one second—the architect’s dilemma evaporates.

This essay draws a line where it matters for architects: the line between MCP tools and A2A agents. I will introduce a clear framework, built around the “Vending Machine Versus Concierge” model, to help you choose the right interface based on your consumer’s needs. I will also explore failure modes, testing, and the powerful Gatekeeper Pattern that shows how these two interfaces can work together to create systems that are not just clever but truly reliable.

Two Very Different Interfaces

MCP presents tools—named operations with declared inputs and outputs. The caller (a person, program, or agent) must already know what it wants, and provide a complete payload. The tool validates, executes once, and returns a result. If your mental image is a vending machine—insert a well-formed request, get a deterministic response—you’re close enough.

A2A presents agents—goal-first collaborators that converse, plan, and act across turns. The caller expresses an outcome (“book a refundable flight under $450”), not an argument list. The agent asks clarifying questions, calls tools as needed, and holds onto session state until the job is done. If you picture a concierge—interacting, negotiating trade-offs, and occasionally escalating—you’re in the right neighborhood.

Neither interface is “better.” They are optimized for different situations:

  • MCP is fast to reason about, easy to test, and strong on determinism and auditability.
  • A2A is built for ambiguity, long-running processes, and preference capture.

Bringing the Interfaces to Life: A Booking Example

To see the difference in practice, let’s imagine a simple task: booking a specific meeting room in an office.

The MCP “vending machine” expects a perfectly structured, machine-readable request for its book_room_tool. The caller must provide all necessary information in a single, valid payload:

{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "book_room_tool",
    "arguments": {
      "room_id": "CR-104B",
      "start_time": "2025-11-05T14:00:00Z",
      "end_time": "2025-11-05T15:00:00Z",
      "organizer": "user@example.com"
    }
  }
}

Any deviation—a missing field or incorrect data type—results in an immediate error. This is the vending machine: You provide the exact code of the item you want (e.g., “D4”) or you get nothing.
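To make that contract concrete, here is a minimal sketch of what a strict tool handler might look like. The handler name and validation logic mirror the example payload above, but they are purely illustrative, not an official MCP SDK API:

```python
# Minimal sketch of the vending-machine contract: a hypothetical
# book_room_tool handler that rejects any payload deviating from its
# schema before doing any work. Field names mirror the example payload;
# the validation approach is illustrative, not a real MCP SDK API.

REQUIRED_FIELDS = {
    "room_id": str,
    "start_time": str,
    "end_time": str,
    "organizer": str,
}

def book_room_tool(arguments: dict) -> dict:
    # Strict validation: a missing field or wrong type is an
    # immediate error, never a clarifying question.
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in arguments:
            return {"error": f"missing required field: {field}"}
        if not isinstance(arguments[field], expected_type):
            return {"error": f"invalid type for field: {field}"}
    # Validate, execute once, return a deterministic result.
    return {"status": "booked", "room_id": arguments["room_id"]}
```

A complete payload books the room; an incomplete one fails immediately, with no negotiation.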

The A2A “concierge,” an “office assistant” agent, is approached with a high-level, ambiguous goal. It uses conversation to resolve ambiguity:

User: “Hey, can you book a room for my 1-on-1 with Alex tomorrow afternoon?”
Agent: “Of course. To make sure I get the right one, what time works best, and how long will you need it for?”

The agent’s job is to take the ambiguous goal, gather the necessary details, and then likely call the MCP tool behind the scenes once it has a complete, valid set of arguments.
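A hypothetical slot-filling loop can illustrate this handoff. The slot names and the `ask` callback are assumptions for the sketch; a real agent would use an LLM to extract details from free-form conversation:

```python
# Illustrative sketch of the concierge side: a tiny slot-filling loop
# that turns an ambiguous goal into a complete, valid payload for an
# MCP tool. The slot names and ask() callback are hypothetical.

REQUIRED_SLOTS = ["room_id", "start_time", "end_time", "organizer"]

def concierge_book_room(known: dict, ask) -> dict:
    filled = dict(known)
    # Ask a clarifying question for each detail still missing.
    for slot in REQUIRED_SLOTS:
        if slot not in filled:
            filled[slot] = ask(f"What should I use for {slot}?")
    # Once every argument is known, make the one-shot tool call.
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "book_room_tool", "arguments": filled},
    }
```

The conversational front end absorbs the ambiguity; the payload it eventually emits is exactly the rigid, prevalidated request the vending machine demands.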

With this clear dichotomy established—the predictable vending machine (MCP) versus the stateful concierge (A2A)—how do we choose? As I argued in the introduction, the answer isn’t found in your tech stack. It’s found by asking the most important architectural question of all: Who is calling your service?

Step 1: Identify your consumer

  1. The machine consumer: A need for predictability
    Is your service going to be called by another automated system, a script, or another agent acting in a purely deterministic capacity? This consumer requires absolute predictability. It needs a rigid, unambiguous contract that can be scripted and relied upon to behave the same way every single time. It cannot handle a clarifying question or an unexpected update; any deviation from the strict contract is a failure. This consumer doesn’t want a conversation; it needs a vending machine. This nonnegotiable requirement for a predictable, stateless, and transactional interface points directly to designing your service as a tool (MCP).
  2. The human (or agentic) consumer: A need for convenience
    Is your service being built for a human end user or for a sophisticated AI that’s trying to fulfill a complex, high-level goal? This consumer values convenience and the offloading of cognitive load. They don’t want to specify every step of a process; they want to delegate ownership of a goal and trust that it will be handled. They’re comfortable with ambiguity because they expect the service—the agent—to resolve it on their behalf. This consumer doesn’t want to follow a rigid script; they need a concierge. This requirement for a stateful, goal-oriented, and conversational interface points directly to designing your service as an agent (A2A).

By starting with the consumer, the architect’s dilemma often evaporates. Before you ever debate statefulness or determinism, you first define the user experience you are obligated to provide. In most cases, identifying your customer will give you your definitive answer.

Step 2: Validate with the four factors

Once you have identified who calls your service, you have a strong hypothesis for your design. A machine consumer points to a tool; a human or agentic consumer points to an agent. The next step is to validate this hypothesis with a technical litmus test. This framework gives you the vocabulary to justify your choice and ensure the underlying architecture matches the user experience you intend to create.

  1. Determinism versus ambiguity
    Does your service require a precise, unambiguous input, or is it designed to interpret and resolve ambiguous goals? A vending machine is deterministic. Its API is rigid: GET /item/D4. Any other request is an error. This is the world of MCP, where a strict schema ensures predictable interactions. A concierge handles ambiguity. “Find me a nice place for dinner” is a valid request that the agent is expected to clarify and execute. This is the world of A2A, where a conversational flow allows for clarification and negotiation.
  2. Simple execution versus complex process
    Is the interaction a single, one-shot execution, or a long-running, multistep process? A vending machine performs a short-lived execution. The entire operation—from payment to dispensing—is an atomic transaction that is over in seconds. This aligns with the synchronous-style, one-shot model of MCP. A concierge manages a process. Booking a full travel itinerary might take hours or even days, with multiple updates along the way. This requires the asynchronous, stateful nature of A2A, which can handle long-running tasks gracefully.
  3. Stateless versus stateful
    Does each request stand alone or does the service need to remember the context of previous interactions? A vending machine is stateless. It doesn’t remember that you bought a candy bar five minutes ago. Each transaction is a blank slate. MCP is designed for these self-contained, stateless calls. A concierge is stateful. It remembers your preferences, the details of your ongoing request, and the history of your conversation. A2A is built for this, using concepts like a session or thread ID to maintain context.
  4. Direct control versus delegated ownership
    Is the consumer orchestrating every step, or are they delegating the entire goal? When using a vending machine, the consumer is in direct control. You are the orchestrator, deciding which button to press and when. With MCP, the calling application retains full control, making a series of precise function calls to achieve its own goal. With a concierge, you delegate ownership. You hand over the high-level goal and trust the agent to manage the details. This is the core model of A2A, where the consumer offloads the cognitive load and trusts the agent to deliver the outcome.
Factor      | Tool (MCP)                         | Agent (A2A)                      | Key question
Determinism | Strict schema; errors on deviation | Clarifies ambiguity via dialogue | Can inputs be fully specified up front?
Process     | One-shot                           | Multistep/long-running           | Is this atomic or a workflow?
State       | Stateless                          | Stateful/sessionful              | Must we remember context/preferences?
Control     | Caller orchestrates                | Ownership delegated              | Who drives: the caller or the callee?

Table 1: Four-question framework

These factors are not independent checkboxes; they are four facets of the same core principle. A service that is deterministic, transactional, stateless, and directly controlled is a tool. A service that handles ambiguity, manages a process, maintains state, and takes ownership is an agent. By using this framework, you can confidently validate that the technical architecture of your service aligns perfectly with the needs of your customer.
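As a thinking aid (not a library API), the litmus test can be reduced to a small helper that tallies the four answers:

```python
# The four-factor litmus test as a tiny decision helper. Each argument
# answers one key question from Table 1. This is a thinking aid, not
# part of any real SDK.

def choose_interface(inputs_known_up_front: bool,
                     atomic: bool,
                     stateless: bool,
                     caller_orchestrates: bool) -> str:
    tool_votes = sum([inputs_known_up_front, atomic,
                      stateless, caller_orchestrates])
    # The four facets usually agree; a split vote signals a design
    # worth revisiting against the consumer's actual experience.
    if tool_votes == 4:
        return "tool (MCP)"
    if tool_votes == 0:
        return "agent (A2A)"
    return "mixed signals: revisit the consumer's experience"
```

A unanimous vote in either direction confirms your consumer-driven hypothesis; anything else is a prompt to re-examine the edge cases discussed next.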

No framework, no matter how clear…

…can perfectly capture the messiness of the real world. While the “Vending Machine Versus Concierge” model provides a robust guide, architects will eventually encounter services that seem to blur the lines. The key is to remember the core principle we’ve established: The choice is dictated by the consumer’s experience, not the service’s internal complexity.

Let’s explore two common edge cases.

The complex tool: The iceberg
Consider a service that performs a highly complex, multistep internal process, like a video transcoding API. A consumer sends a video file and a desired output format. This is a simple, predictable request. But internally, this one call might kick off a massive, long-running workflow involving multiple machines, quality checks, and encoding steps. It’s a hugely complex process.

However, from the consumer’s perspective, none of that matters. They made a single, stateless, fire-and-forget call. They don’t need to manage the process; they just need a predictable result. This service is like an iceberg: 90% of its complexity is hidden beneath the surface. But because its external contract is that of a vending machine—a simple, deterministic, one-shot transaction—it is, and should be, implemented as a tool (MCP).
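A toy version of the iceberg might look like this; the pipeline step names and the `transcode` signature are invented for illustration:

```python
# The "iceberg" in miniature: a hypothetical transcode() tool whose
# external contract is a single deterministic call, even though it
# runs a multistep pipeline internally. Step names are illustrative.

def transcode(video_id: str, output_format: str) -> dict:
    # Internal complexity: a multi-stage workflow the caller never sees.
    for step in ("ingest", "analyze", "encode", "quality_check", "publish"):
        pass  # stand-in for real long-running work at each stage
    # External simplicity: one stateless, predictable result.
    return {"video_id": video_id, "format": output_format, "status": "done"}
```

The caller sees one input, one output, every time; the 90% below the waterline never leaks into the contract.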

The simple agent: The scripted conversation
Now consider the opposite: a service with very simple internal logic that still requires a conversational interface. Imagine a chatbot for booking a dentist appointment. The internal logic might be a simple state machine: ask for a date, then a time, then a patient name. It’s not “intelligent” or particularly flexible.

However, it must remember the user’s previous answers to complete the booking. It’s an inherently stateful, multiturn interaction. The consumer cannot provide all the required information in a single, prevalidated call. They need to be guided through the process. Despite its internal simplicity, the need for a stateful dialogue makes it a concierge. It must be implemented as an agent (A2A) because its consumer-facing experience is that of a conversation, however scripted.
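A minimal sketch of such a scripted conversation, with hypothetical prompts and slot order, shows why session state is unavoidable even when the logic is trivial:

```python
# Sketch of the "simple agent": a scripted state machine that must
# remember prior answers across turns, which is what forces the
# A2A-style interface despite trivial internal logic. The prompts
# and slot order are hypothetical.

class DentistBookingAgent:
    SLOTS = ["date", "time", "patient_name"]

    def __init__(self):
        self.answers = {}  # session state carried across turns

    def next_turn(self, user_message=None):
        # Record the answer to whatever we asked last.
        if user_message is not None:
            pending = self.SLOTS[len(self.answers)]
            self.answers[pending] = user_message
        # Ask for the next missing slot, or confirm the booking.
        if len(self.answers) < len(self.SLOTS):
            return f"Please provide your {self.SLOTS[len(self.answers)]}."
        return (f"Booked {self.answers['patient_name']} for "
                f"{self.answers['date']} at {self.answers['time']}.")
```

No single call could carry all three answers up front; the dialogue, however rigid, is the interface.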

These gray areas reinforce the framework’s central lesson. Don’t get distracted by what your service does internally. Focus on the experience it provides externally. That contract with your customer is the ultimate arbiter in the architect’s dilemma.

Testing What Matters: Different Strategies for Different Interfaces

A service’s interface doesn’t just dictate its design; it dictates how you validate its correctness. Vending machines and concierges have fundamentally different failure modes and require different testing strategies.

Testing MCP tools (vending machines):

  • Contract testing: Validate that inputs and outputs strictly adhere to the defined schema.
  • Idempotency tests: Ensure that calling the tool multiple times with the same inputs produces the same result without side effects.
  • Deterministic logic tests: Use standard unit and integration tests with fixed inputs and expected outputs.
  • Adversarial fuzzing: Test for security vulnerabilities by providing malformed or unexpected arguments.
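As a hedged illustration, the first two checks might look like the following, written as plain assertion-style tests against a deterministic stand-in for the tool under test:

```python
# Sketch of vending-machine-style tests: contract and idempotency
# checks against a hypothetical room-booking tool. In practice these
# would live in a pytest suite; the stand-in below keeps the example
# self-contained.

def fake_book_room_tool(arguments: dict) -> dict:
    # Deterministic stand-in for the real tool under test.
    required = {"room_id", "start_time", "end_time", "organizer"}
    if not required <= arguments.keys():
        return {"error": "invalid arguments"}
    return {"status": "booked", "room_id": arguments["room_id"]}

def test_contract_rejects_missing_fields():
    # Contract test: an incomplete payload must be rejected outright.
    assert "error" in fake_book_room_tool({"room_id": "CR-104B"})

def test_idempotent_same_input_same_output():
    # Idempotency test: identical inputs must yield identical results.
    args = {"room_id": "CR-104B", "start_time": "t0",
            "end_time": "t1", "organizer": "user@example.com"}
    assert fake_book_room_tool(args) == fake_book_room_tool(args)
```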

Testing A2A agents (concierges):

  • Goal completion rate (GCR): Measure the percentage of conversations where the agent successfully achieved the user’s high-level goal.
  • Conversational efficiency: Track the number of turns or clarifications required to complete a task.
  • Tool selection accuracy: For complex agents, verify that the right MCP tool was chosen for a given user request.
  • Conversation replay testing: Use logs of real user interactions as a regression suite to ensure updates don’t break existing conversational flows.
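The first two metrics fall out of conversation logs in a few lines. The log schema here (dicts with `goal_achieved` and `turns` keys) is an assumption for the sketch:

```python
# Sketch of concierge-style metrics: goal completion rate and
# conversational efficiency computed from logs. The log format is
# hypothetical; adapt it to whatever your agent platform records.

def conversation_metrics(logs):
    if not logs:
        return {"gcr": 0.0, "avg_turns": 0.0}
    completed = sum(1 for c in logs if c["goal_achieved"])
    total_turns = sum(c["turns"] for c in logs)
    return {
        "gcr": completed / len(logs),          # goal completion rate
        "avg_turns": total_turns / len(logs),  # conversational efficiency
    }
```

Tracked over time, a falling GCR or a rising turn count flags regressions that no schema validator would ever catch.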

The Gatekeeper Pattern

Our journey so far has focused on a dichotomy: MCP or A2A, vending machine or concierge. But the most sophisticated and robust agentic systems do not force a choice. Instead, they recognize that these two protocols don’t compete with each other; they complement each other. The ultimate power lies in using them together, with each playing to its strengths.

The most effective way to achieve this is through a powerful architectural choice we can call the Gatekeeper Pattern.

In this pattern, a single, stateful A2A agent acts as the primary, user-facing entry point—the concierge. Behind this gatekeeper sits a collection of discrete, stateless MCP tools—the vending machines. The A2A agent takes on the complex, messy work of understanding a high-level goal, managing the conversation, and maintaining state. It then acts as an intelligent orchestrator, making precise, one-shot calls to the appropriate MCP tools to execute specific tasks.

Consider a travel agent. A user interacts with it via A2A, giving it a high-level goal: “Plan a business trip to London for next week.”

  • The travel agent (A2A) accepts this ambiguous request and starts a conversation to gather details (exact dates, budget, etc.).
  • Once it has the necessary information, it calls a flight_search_tool (MCP) with precise arguments like origin, destination, and date.
  • It then calls a hotel_booking_tool (MCP) with the required city, check_in_date, and room_type.
  • Finally, it might call a currency_converter_tool (MCP) to provide expense estimates.

Each tool is a simple, reliable, and stateless vending machine. The A2A agent is the smart concierge that knows which buttons to press and in what order. This pattern provides several significant architectural benefits:

  • Decoupling: It separates the complex, conversational logic (the “how”) from the simple, reusable business logic (the “what”). The tools can be developed, tested, and maintained independently.
  • Centralized governance: The A2A gatekeeper is the perfect place to implement cross-cutting concerns. It can handle authentication, enforce rate limits, manage user quotas, and log all activity before a single tool is ever invoked.
  • Simplified tool design: Because the tools are just simple MCP functions, they don’t need to worry about state or conversational context. Their job is to do one thing and do it well, making them incredibly robust.
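A compressed sketch of the pattern, with stand-in tool bodies and an invented orchestration API, might look like this:

```python
# Sketch of the Gatekeeper Pattern: one stateful A2A-style agent
# orchestrating stateless MCP-style tools. Tool names match the travel
# example; their bodies are stand-ins, and the gather/execute API is
# illustrative, not a real SDK.

def flight_search_tool(origin, destination, date):
    return {"flight": f"{origin}->{destination} on {date}"}

def hotel_booking_tool(city, check_in_date, room_type):
    return {"hotel": f"{room_type} room in {city} from {check_in_date}"}

def currency_converter_tool(amount, from_ccy, to_ccy):
    return {"converted": f"{amount} {from_ccy} in {to_ccy}"}

class TravelGatekeeper:
    def __init__(self):
        self.session = {}  # state gathered through conversation

    def gather(self, **details):
        # Conversational phase: accumulate details across turns.
        self.session.update(details)

    def execute(self):
        # Orchestration phase: precise one-shot calls to each tool.
        s = self.session
        return {
            "flight": flight_search_tool(s["origin"], s["destination"], s["date"]),
            "hotel": hotel_booking_tool(s["destination"], s["date"], "standard"),
            "budget": currency_converter_tool(s["budget"], "USD", "GBP"),
        }
```

All of the statefulness lives in the gatekeeper's session; each tool remains a pure function of its arguments.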

Making the Gatekeeper Production-Ready

Beyond its design benefits, the Gatekeeper Pattern is the ideal place to implement the operational guardrails required to run a reliable agentic system in production.

  • Observability: Each A2A conversation generates a unique trace ID. This ID must be propagated to every downstream MCP tool call, allowing you to trace a single user request across the entire system. Structured logs for tool inputs and outputs (with PII redacted) are critical for debugging.
  • Guardrails and security: The A2A Gatekeeper acts as a single point of enforcement for critical policies. It handles authentication and authorization for the user, enforces rate limits and usage quotas, and can maintain a list of which tools a particular user or group is allowed to call.
  • Resilience and fallbacks: The Gatekeeper must gracefully manage failure. When it calls an MCP tool, it should implement patterns like timeouts, retries with exponential backoff, and circuit breakers. Critically, it is responsible for the final failure state—escalating to a human in the loop for review or clearly communicating the issue to the end user.
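A minimal sketch of that resilience layer, combining trace-ID propagation with retries and exponential backoff, might look like this. The tool signature and the trace-ID convention are assumptions for the sketch, not part of either protocol's specification:

```python
# Sketch of the Gatekeeper's resilience layer: trace-ID propagation
# plus retries with exponential backoff around a downstream tool call.
# The tool signature and trace convention are illustrative only.

import time
import uuid

def call_tool_with_retries(tool, arguments, trace_id=None,
                           max_attempts=3, base_delay=0.01):
    trace_id = trace_id or str(uuid.uuid4())
    for attempt in range(1, max_attempts + 1):
        try:
            # Propagate the trace ID so every downstream call is
            # attributable to the originating A2A conversation.
            return tool(arguments, trace_id=trace_id)
        except Exception:
            if attempt == max_attempts:
                # Final failure state: surface it for escalation to a
                # human in the loop or a clear message to the user.
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

A circuit breaker would wrap this same call site, short-circuiting after repeated failures instead of retrying forever.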

The Gatekeeper Pattern is the ultimate synthesis of our framework. It uses A2A for what it does best—managing a stateful, goal-oriented process—and MCP for what it was designed for—the reliable, deterministic execution of a task.

Conclusion

We began this journey with a simple but frustrating problem: the architect’s dilemma. Faced with the circular advice that “MCP is for tools and A2A is for agents,” we were left in the same position as a traveler trying to get to Edinburgh—knowing that cars use motorways and trains use tracks but with no intuition on which to choose for our specific journey.

The goal was to build that intuition. We did this not by accepting abstract labels, but by reasoning from first principles. We dissected the protocols themselves, revealing how their core mechanics inevitably lead to two distinct service profiles: the predictable, one-shot “vending machine” and the stateful, conversational “concierge.”

With that foundation, we established a clear, two-step framework for a confident design choice:

  1. Start with your customer. The most critical question is not a technical one but an experiential one. A machine consumer needs the predictability of a vending machine (MCP). A human or agentic consumer needs the convenience of a concierge (A2A).
  2. Validate with the four factors. Use the litmus test of determinism, process, state, and ownership to technically justify and solidify your choice.

Ultimately, the most robust systems will synthesize both, using the Gatekeeper Pattern to combine the strengths of a user-facing A2A agent with a suite of reliable MCP tools.

The choice is no longer a dilemma. By focusing on the consumer’s needs and understanding the fundamental nature of the protocols, architects can move from confusion to confidence, designing agentic ecosystems that are not just functional but also intuitive, scalable, and maintainable.

Generative AI in the Real World: Understanding A2A with Heiko Hotz and Sokratis Kartakis

Everyone is talking about agents: single agents and, increasingly, multi-agent systems. What kind of applications will we build with agents, and how will we build with them? How will agents communicate with each other effectively? Why do we need a protocol like A2A to specify how they communicate? Join Ben Lorica as he talks with Heiko Hotz and Sokratis Kartakis about A2A and our agentic future.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O’Reilly learning platform.

Timestamps

  • 0:00: Intro to Heiko and Sokratis.
  • 0:24: It feels like we’re in a Cambrian explosion of frameworks. Why agent-to-agent communication? Some people might think we should focus on single-agent tooling first.
  • 0:53: Many developers start developing agents with completely different frameworks. At some point they want to link the agents together. One way is to change the code of your application. But it would be easier if you could get the agents talking the same language. 
  • 1:43: Was A2A something developers approached you for?
  • 1:53: It is fair to say that A2A is a forward-looking protocol. We see a future where one team develops an agent that does something and another team in the same organization or even outside would like to leverage that capability. An agent is very different from an API. In the past, this was done via API. With agents, I need a stateful protocol where I send a task and the agent can run asynchronously in the background and do what it needs to do. That’s the justification for the A2A protocol. No one has explicitly asked for this, but we will be there in a few months’ time. 
  • 3:55: For developers in this space, the most familiar is MCP, which is a single agent protocol focused on external tool integration. What is the relationship between MCP and A2A?
  • 4:26: We believe that MCP and A2A will be complementary and not rivals. MCP is specific to tools, and A2A connects agents with each other. That brings us to the question of when to wrap a functionality in a tool versus an agent. If we look at the technical implementation, that gives us some hints when to use each. An MCP tool exposes its capability by a structured schema: I need input A and B and I give you the sum. I can’t deviate from the schema. It’s also a single interaction. If I wrap the same functionality into an agent, the way I expose the functionality is different. A2A expects a natural language description of the agent’s functionality: “The agent adds two numbers.” Also, A2A is stateful. I send a request and get a result. That gives developers a hint on when to use an agent and when to use a tool. I like to use the analogy of a vending machine versus a concierge. I put money into a vending machine and push a button and get something out. I talk to a concierge and say, “I’m thirsty; buy me something to drink.”
  • 7:09: Maybe we can help our listeners make the notion of A2A even more concrete. I tell nonexperts that you’re already using an agent to some extent. Deep research is an agent. I talk to people building AI tools in finance, and I have a notion that I want to research, but I have one agent looking at earnings, another looking at other data. Do you have a canonical example you use?
  • 8:13: We can parallelize A2A with real business. Imagine separate agents that are different employees with different skills. They have their own business cards. They share the business cards with the clients. The client can understand what tasks they want to do: learn about stocks, learn about investments. So I call the right agent or server to get a specialized answer back. Each agent has a business card that describes its skills and capabilities. I can talk to the agent with live streaming or send it messages. You need to define how you communicate with the agent. And you need to define the security method you will use to exchange messages.
  • 9:45: Late last year, people started talking about single agents. But people were already talking about what the agent stack would be: memory, storage, observability, and so on. Now that you are talking about multi-agents or A2A, are there important things that need to be introduced to the agentic stack?
  • 10:32: You would still have the same. You’d arguably need more. Statefulness, memory, access to tools.
  • 10:48: Is that going to be like a shared memory across agents?
  • 10:52: It all depends on the architecture. The way I imagine a vanilla architecture, the user speaks to a router agent, which is the primary contact of the user with the system. That router agent does very simple things like saying “hello.” But once the user asks the system “Book me a holiday to Paris,” there are many steps involved. (No agent can do this yet). The capabilities are getting better and better. But the way I imagine it is that the router agent is the boss, and two or three remote agents do different things. One finds flights; one books hotels; one books cars—they all need information from each other. The router agent would hold the context for all of those. If you build it all within one agentic framework, it becomes even easier because those frameworks have the concepts of shared memory built in. But it’s not necessarily needed. If the hotel booking agent is built in LangChain and from a different team than the flight booking agent, the router agent would decide what information is needed.
  • 13:28: What you just said is the argument for why you need these protocols. Your example is the canonical simple example. What if my trip involves four different countries? I might need a hotel agent for every country. Because hotels might need to be specialized for local knowledge.
  • 14:12: Technically, you might not need to change agents. You need to change the data—what agent has access to what data. 
  • 14:29: We need to parallelize single agents with multi-agent systems; we move from a monolithic application to microservices that have small, dedicated agents to perform specific tasks. This has many benefits. It also makes the life of the developer easier because you can test, you can evaluate, you can perform checks before moving to production. Imagine that you gave a human 100 tools to perform a task. The human will get confused. It’s the same for agents. You need small agents with specific tools to perform the right task. 
  • 15:31: Heiko’s example drives home why something like MCP may not be enough. If you have a master agent and all it does is integrate with external sites, but the integration is not smart—if the other side has an agent, that agent could be thinking as well. While agent-to-agent is still something of science fiction at the moment, it does make sense moving forward.
  • 16:11: Coming back to Sokratis’s thought, when you give an agent too many tools and make it try to do too many things, it just becomes more and more likely that by reasoning through these tools, it will pick the wrong tool. That gets us to evaluation and fault tolerance. 
  • 16:52: At some point we might see multi-agent systems communicate with other multi-agent systems—an agent mesh.
  • 17:05: In the scenario of this hotel booking, each of the smaller agents would use their own local model. They wouldn’t all rely on a central model. Almost all frameworks allow you to choose the right model for the right task. If a task is simple but still requires an LLM, a small open source model could be sufficient. If the task requires heavy “brain” power, you might want to use Gemini 2.5 Pro.
  • 18:07: Sokratis brought up the word security. One of the earlier attacks against MCP is a scenario when an attacker buries instructions in the system prompt of the MCP server or its metadata, which then gets sent into the model. In this case, you have smaller agents, but something may happen to the smaller agents. What attack scenarios worry you at this point?
  • 19:02: There are many levels at which something might go wrong. With a single agent, you have to implement guardrails before and after each call to an LLM or agent.
  • 19:24: In a single agent, there is one model. Now each agent is using its own model. 
  • 19:35: And this makes the evaluation and security guardrails even more problematic. From A2A’s side, it supports all the different security types to authenticate agents, like API keys, HTTP authentication, OAuth 2. Within the agent card, the agent can define what you need to use to use the agent. Then you need to think of this as a service possibility. It’s not just a responsibility of the protocol. It’s the responsibility of the developer.
  • 20:29: It’s equivalent to right now with MCP. There are thousands of MCP servers. How do I know which to trust? But at the same time, there are thousands of Python packages. I have to figure out which to trust. At some level, some vetting needs to be done before you trust another agent. Is that right?
  • 21:00: I would think so. There’s a great article: “The S in MCP Stands for Security.” We can’t speak as much to the MCP protocol, but I do believe there have been efforts to implement authentication methods and address security concerns, because this is the number one question enterprises will ask. Without proper authentication and security, you will not have adoption in enterprises, which means you will not have adoption at all. With A2A, these concerns were addressed head-on because the A2A team understood that to get any chance of traction, built-in security was priority 0. 
  • 22:25: Are you familiar with the buzzword “large action models”? The notion that your model is now multimodal and can look at screens and environment states.
  • 22:51: Within DeepMind, we have Project Mariner, which leverages Gemini’s capabilities to act on your behalf on your computer screen.
  • 23:06: It makes sense that it’s something you want to avoid if you can. If you can do things in a headless way, why do you want to pretend you’re human? If there’s an API or integration, you would go for that. But the reality is that many tools knowledge workers use may not have these features yet. How does that impact how we build agent security? Now that people might start building agents to act like knowledge workers using screens?
  • 23:45: I spoke with a bank in the UK yesterday, and they were very clear that they need to have complete observability on agents, even if that means slowing down the process. Because of regulation, they need to be able to explain every request that went to the LLM, and every action that followed from that. I believe observability is the key in this setup, where you just cannot tolerate any errors. Because it is LLM-based, there will still be errors. But in a bank you must at least be in a position to explain exactly what happened.
  • 24:45: With most customers, whenever there’s an agentic solution, they need to share that they are using an agentic solution and the way [they] are using it is X, Y, and Z. A legal agreement is required to use the agent. The customer needs to be clear about this. There are other scenarios like UI testing where, as a developer, I want an agent to start using my machine. Or an elder who is connected with customer support of a telco to fix a router. This is impossible for a nontechnical person to achieve. The fear is there, like nuclear energy, which can be used in two different ways. It’s the same with agents and GenAI. 
  • 26:08: A2A is a protocol. As a protocol, there’s only so much you can do on the security front. At some level, that’s the responsibility of the developers. I may want to signal that my agent is secure because I’ve hired a third party to do penetration testing. Is there a way for the protocol to embed knowledge about the extra step?
  • 27:00: A protocol can’t handle all the different cases. That’s why A2A created the notion of extensions. You can extend the data structure and also the methods or the profile. Within this profile, you can say, “I want all the agents to use this encryption.” And with that, you can tell all your systems to use the same patterns. You create the extension once, you adopt that for all the A2A compatible agents, and it’s ready. 
  • 27:51: For our listeners who haven’t opened the protocol, how easy is it? Is it like REST or RPC?
  • 28:05: I personally learned it within half a day. For someone who is familiar with RPC, with traditional internet protocols, A2A is very intuitive. You have a server; you have a client. All you need to learn is some specific concepts, like the agent card. (The agent card itself could be used to signal not only my capabilities but how I have been tested. You can even think of other metrics like uptime and success rate.) You need to understand the concept of a task. And then the remote agent will update on this task as defined—for example, every five minutes or [upon] completion of specific subtasks.
  • 29:52: A2A already supports JavaScript, TypeScript, Python, Java, and .NET. In ADK, the agent development kit, with one line of code we can define a new A2A agent.
  • 30:27: What is the current state of adoption?
  • 30:40: I should have looked at the PyPI download numbers.
  • 30:49: Are you aware of teams or companies starting to use A2A?
  • 30:55: I’ve worked with a customer with an insurance platform. I don’t know anything about insurance, but there’s the broker and the underwriter, which are usually two different companies. They were thinking about building an agent for each and having the agents talk via A2A.
  • 31:32: Sokratis, what about you?
  • 31:40: The interest is there for sure. Three weeks ago, I presented [at] the Google Cloud London Summit with a big customer on the integration of A2A into their agentic platform, and we shared tens of customers, including the announcement from Microsoft. Many customers start implementing agents. At some point they lack integration across business units. Now they see the more agents they build, the more the need for A2A.
  • 32:32: A2A is now in the Linux Foundation, which makes it more attractive for companies to explore, adopt, and contribute to, because it’s no longer controlled by a single entity. So decision making will be shared across multiple entities.
